Unit 4
Contents
1 Introduction
6 Ergodicity
6.1 Mean-Ergodic Theorem
1 Introduction
In electrical systems, voltage or current waveforms are used as signals for collecting, trans-
mitting or processing information, as well as for controlling and providing power to a variety
of devices. These signals (voltage or current waveforms) are functions of time and are of two
classes: deterministic and random. Deterministic signals can be described by the usual
mathematical functions with time t as the independent variable. But a random signal always
has some element of uncertainty associated with it and hence it is not possible to determine
its value exactly at any given point of time. However, we may be able to describe the random
signal in terms of its average properties such as the average power in the random signal, its
spectral distribution and the probability that the signal amplitude exceeds a given value.
The probabilistic model used for characterising a random signal is called a random process
or stochastic process.
A random variable (RV) is a rule (or function) that assigns a real number to every
outcome of a random experiment, while a random process is a rule (or function) that assigns
a time function to every outcome of a random experiment. For example, consider the random
experiment of tossing a die at t = 0 and observing the number on the top face. The sample
space of this experiment consists of the outcomes {1, 2, 3, . . . , 6}. For each outcome of the
experiment, let us arbitrarily assign a function of time t (0 ≤ t < ∞) in the following manner.

Outcome:           1            2            3            4            5              6
Function of time:  x1(t) = −4   x2(t) = −2   x3(t) = 2    x4(t) = 4    x5(t) = −t/2   x6(t) = t/2
The set of functions {x1 (t), x2 (t), . . . , x6 (t)} represents a random process.
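This assignment can also be written as a small Python sketch (an illustrative aside; the dictionary name and the sampled time points are arbitrary choices): fixing the outcome s selects one whole sample function, while fixing t and letting s vary gives a random variable.

```python
import random

# The die-toss random process above: each outcome s = 1,...,6 is assigned a
# whole function of time x_s(t), 0 <= t < infinity.
ensemble = {
    1: lambda t: -4.0,
    2: lambda t: -2.0,
    3: lambda t: 2.0,
    4: lambda t: 4.0,
    5: lambda t: -t / 2.0,
    6: lambda t: t / 2.0,
}

s = random.randint(1, 6)                     # toss the die at t = 0 (fixes s)
x = ensemble[s]                              # the resulting sample function (realisation)
print("outcome", s, "->", [x(t) for t in (0, 1, 2, 4)])

# Fixing t = 4 instead and letting the outcome vary gives a random variable:
print("X(4) over the ensemble:", [f(4.0) for f in ensemble.values()])
```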
Definition: A random process is a collection (or ensemble) of RVs {X(s, t)} that are
functions of a real variable, namely time t where s ∈ S (sample space) and t ∈ T (parameter
set or index set).
The set of possible values of any individual member of the random process is called state
space. Any individual member itself is called a sample function or a realisation of the process.
Note:
1. If s and t are fixed, then {X(s, t)} is a number.
2. If t is fixed, then {X(s, t)} is a random variable.
3. If s is fixed, then {X(s, t)} is a single time function.
4. If s and t are variables, then {X(s, t)} is a collection of random variables which are
time functions.
2. If T is discrete and S is continuous, the random process is called a continuous random
sequence. For example, if Xn represents the temperature at the end of the nth hour
of a day, then {Xn , 1 ≤ n ≤ 24} is a continuous random sequence, since the temperature
can take any value in an interval and is hence continuous.

4. If both T and S are continuous, the random process is called a continuous random
process. For example, if X(t) represents the maximum temperature at a place in the
interval (0, t), then {X(t)} is a continuous random process. In the names given above,
the word ‘discrete’ or ‘continuous’ refers to the nature of S (the state space), and the
word ‘sequence’ or ‘process’ refers to the nature of T (the index set).
Ensemble average of the first order: E[X(t)] = ∫_{−∞}^{+∞} x f(x, t) dx, where f(x, t) is the first-order density of the process.

Ensemble average of the second order: E[X(t) · X(t + τ)] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x1 x2 f(x1, x2; t, t + τ) dx1 dx2, where f(x1, x2; t, t + τ) is the second-order (joint) density.
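In practice these ensemble averages are estimated by averaging across many realisations. The sketch below is purely illustrative (the function names are arbitrary, and the die-toss ensemble from the Introduction is reused as the example process); it estimates both averages from a matrix of sample functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_mean(paths):
    """Estimate the first-order average E[X(t)] at every time index."""
    return paths.mean(axis=0)

def ensemble_second_order(paths, lag):
    """Estimate the second-order average E[X(t) X(t + lag)] at every valid time index."""
    return (paths[:, : paths.shape[1] - lag] * paths[:, lag:]).mean(axis=0)

# Example ensemble: the die-toss process of the Introduction (rows = realisations).
t = np.arange(0.0, 5.0, 0.5)
funcs = [lambda u: -4 + 0 * u, lambda u: -2 + 0 * u, lambda u: 2 + 0 * u,
         lambda u: 4 + 0 * u, lambda u: -u / 2, lambda u: u / 2]
outcomes = rng.integers(0, 6, size=20_000)
paths = np.array([funcs[s](t) for s in outcomes])

print(ensemble_mean(paths))                  # ~0 at every t
print(ensemble_second_order(paths, lag=2))   # grows with t, so it depends on t, not only on the lag
```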
If certain probability distributions or averages of the random variables forming a random
process {X(t)} do not depend on time t, then the random process is called a stationary
process, i.e.,

E[X(t)] = a constant.

Note: All second-order stationary processes are first-order stationary, but the converse
need not be true.
• Stationary process (or) Strictly Stationary process (or) Strict Sense Stationary process [SSS process]: If a random process is stationary to all orders, then the random process is said to be a strict sense stationary process.
• Evolutionary process: A random process X(t) that is not stationary in any sense
is called an evolutionary process.
= E[X(t)]

Now,

Var[X(t + ∆)] = ∫_{−∞}^{∞} (x − E[X(t)])² f(x, t + ∆) dx
              = ∫_{−∞}^{∞} (x − E[X(t)])² f(x, t) dx        [by (1)]
              = Var[X(t)]
E[X(t)] = (1/π) ∫_{−π/2}^{π/2} cos(t + φ) dφ
        = (1/π) [sin(t + φ)]_{−π/2}^{π/2}
        = (1/π) {sin(t + π/2) − sin(t − π/2)}
        = (1/π) {cos t − [− sin(π/2 − t)]}        [∵ sin(t + π/2) = cos t, sin(t − π/2) = − sin(π/2 − t)]
        = (1/π) {cos t + cos t}        [∵ sin(π/2 − t) = cos t]
        = (2/π) cos t
∴ E[X(t)] depends on time ‘t’.
Hence X(t) is not stationary.
Problem 4: Show that X(t) = A cos(ωt + θ) is not stationary if A and ω are constants
and θ is uniformly distributed in (0, π).
Since θ is uniformly distributed in (0, π), its pdf is f(θ) = 1/π, 0 < θ < π.

∴ E[X(t)] = (1/π) ∫_0^π A cos(ωt + θ) dθ
          = (A/π) [sin(ωt + θ)]_0^π
          = (A/π) {sin(ωt + π) − sin(ωt)}
          = (A/π) {− sin ωt − sin ωt}        [∵ sin(ωt + π) = − sin ωt]
          = −(2A/π) sin ωt
∴ E[X(t)] depends on time ‘t’.
Hence X(t) is not stationary.
Problem 5: Let X(t) = cos(ωt + θ), where θ is uniformly distributed in (−π, π).
Check whether or not X(t) is stationary. Also find the first and second moments of the
process.
First moment of the random process:

Since θ is uniformly distributed in (−π, π), its pdf is f(θ) = 1/(2π), −π < θ < π.

∴ E[X(t)] = (1/2π) ∫_{−π}^{π} cos(ωt + θ) dθ
          = (1/2π) [sin(ωt + θ)]_{−π}^{π}
          = (1/2π) {sin(ωt + π) − sin(ωt − π)}
          = (1/2π) {− sin ωt + sin(π − ωt)}        [∵ sin(ωt + π) = − sin ωt, sin(ωt − π) = − sin(π − ωt)]
          = (1/2π) {− sin ωt + sin ωt}
          = 0, a constant, not a function of t.

∴ X(t) is stationary with respect to the first moment.
∴ First moment of the random process: E[X(t)] = 0
Second moment of the random process:

E[X²(t)] = E[cos²(ωt + θ)]
         = E[{1 + cos(2ωt + 2θ)}/2]
         = 1/2 + (1/2π) ∫_{−π}^{π} (1/2) cos(2ωt + 2θ) dθ
         = 1/2 + (1/4π) [sin(2ωt + 2θ)/2]_{−π}^{π}
         = 1/2 + (1/8π) {sin(2ωt + 2π) − sin(2ωt − 2π)}
         = 1/2 + (1/8π) {sin(2ωt) − sin(2ωt)}        [∵ sin(A ± 2π) = sin A]
         = 1/2 + 0
         = 1/2

∴ Second moment of the random process: E[X²(t)] = 1/2
∴ X(t) is stationary with respect to both the first and second moments.
Note: Variance of the random process X(t) = E[X²(t)] − {E[X(t)]}² = 1/2 − (0)² = 1/2
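A quick Monte Carlo check of Problem 5 (purely illustrative; ω = 2 and t = 0.8 are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample theta ~ U(-pi, pi) and evaluate X(t) = cos(omega*t + theta) across the ensemble.
omega, t = 2.0, 0.8
theta = rng.uniform(-np.pi, np.pi, 1_000_000)
x = np.cos(omega * t + theta)

print("first moment :", x.mean())          # ~0 for any t
print("second moment:", (x ** 2).mean())   # ~0.5 for any t
print("variance     :", x.var())           # ~0.5, matching the note above
```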
Proof: We have RXX(t1, t2) = E[X(t1) · X(t2)] = E[X1 · X2], where t2 = t1 + τ.
If there are no periodic components, then as |τ| → ∞, X1 and X2 can be considered
independent, so RXX(t1, t2) → E[X1] · E[X2].
Since X1 and X2 are from the same random process at two different time instants,
E[X1] = E[X2] = E[X], and hence lim_{|τ|→∞} RXX(τ) = {E[X]}².
Note: If the random variables have zero mean, then the autocorrelation function tends to
zero as |τ| → ∞, i.e., lim_{|τ|→∞} RXX(τ) = 0.
6. If the random process Z(t) = X(t) + Y (t) where X(t) and Y (t) are random processes,
then RZZ (τ ) = RXX (τ ) + RY Y (τ ) + RXY (τ ) + RY X (τ ).
Proof :
Consider RZZ (τ )
= E[Z(t).Z(t + τ )]
= E{[X(t) + Y (t)] · [X(t + τ ) + Y (t + τ )]}
= E{[X(t) · X(t + τ ) + X(t) · Y (t + τ ) + Y (t) · X(t + τ ) + Y (t) · Y (t + τ )]}
= E[X(t) · X(t+τ )]+E[X(t) · Y (t+τ )]+E[Y (t) · X(t+τ )]+E[Y (t) · Y (t+τ )]
= RXX (τ ) + RXY (τ ) + RY X (τ ) + RY Y (τ )
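This identity can be checked numerically on any pair of processes; the sketch below uses two independent random-phase cosines purely as an example (the frequencies and the time instants are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)

# Ensemble estimate of R_ZZ versus R_XX + R_YY + R_XY + R_YX at one (t, t + tau) pair.
n, t, tau = 500_000, 0.4, 1.1
phi = rng.uniform(-np.pi, np.pi, n)          # phase of X
psi = rng.uniform(-np.pi, np.pi, n)          # phase of Y (independent of phi)

X_t, X_tt = np.cos(t + phi), np.cos(t + tau + phi)
Y_t, Y_tt = np.cos(2 * (t + psi)), np.cos(2 * (t + tau + psi))
Z_t, Z_tt = X_t + Y_t, X_tt + Y_tt

corr = lambda a, b: (a * b).mean()
lhs = corr(Z_t, Z_tt)
rhs = corr(X_t, X_tt) + corr(Y_t, Y_tt) + corr(X_t, Y_tt) + corr(Y_t, X_tt)
print(lhs, rhs)   # the two values agree
```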
Wide sense stationary process: A random process {X(t)} is called wide sense stationary if
(1) E[X(t)] = a constant, and
(2) RXX(t, t + τ) = E[X(t) · X(t + τ)] = RXX(τ), a function of τ only,
(or) RXX(t1, t2) = E[X(t1) · X(t2)] = a function of the time difference (t2 − t1) (or) (t1 − t2).
Note: A wide sense stationary process is also called a weak sense stationary (W.S.S.) process.
E[X(t)] = (1/2π) ∫_0^{2π} A cos(ωt + θ) dθ
        = (A/2π) [sin(ωt + θ)]_0^{2π}
        = (A/2π) {sin(ωt + 2π) − sin(ωt)}
        = (A/2π) {sin ωt − sin ωt}        [∵ sin(ωt + 2π) = sin ωt]
        = (A/2π) [0]
        = 0, a constant
RXX(t, t + τ) = E[X(t) · X(t + τ)] = E[A² cos(ωt + θ) cos(ωt + ωτ + θ)]

  = (A²/2) E[cos{(ωt + θ) − (ωt + ωτ + θ)} + cos{(ωt + θ) + (ωt + ωτ + θ)}]
        [∵ cos A cos B = (1/2){cos(A − B) + cos(A + B)}]
  = (A²/2) E[cos(ωτ) + cos(2ωt + ωτ + 2θ)]        [∵ cos(−ωτ) = cos ωτ]
  = (A²/2) cos(ωτ) + (A²/2) E[cos(2ωt + ωτ + 2θ)]        (3)
        [∵ the expectation of a term free of the random variable is the term itself]
  = (A²/2) cos(ωτ) + (A²/4π) ∫_0^{2π} cos(2ωt + ωτ + 2θ) dθ
  = (A²/2) cos(ωτ) + (A²/4π) [sin(2ωt + ωτ + 2θ)/2]_0^{2π}
  = (A²/2) cos(ωτ) + (A²/8π) {sin(2ωt + ωτ + 4π) − sin(2ωt + ωτ)}
  = (A²/2) cos(ωτ) + 0        [∵ sin(4π + A) = sin A]
  = (A²/2) cos(ωτ), a function of τ alone.

∴ E[X(t)] is a constant and RXX(t, t + τ) is a function of τ; hence X(t) is wide sense stationary.
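A simulation agrees with this result (an illustrative check; A = 3, ω = 1.5, t = 0.9 and τ = 0.6 are arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble estimate of R_XX(t, t+tau) for X(t) = A cos(w*t + theta), theta ~ U(0, 2*pi).
A, w, t, tau = 3.0, 1.5, 0.9, 0.6
theta = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)

r_est = (A * np.cos(w * t + theta) * A * np.cos(w * (t + tau) + theta)).mean()
print(r_est, A ** 2 / 2 * np.cos(w * tau))   # both ~ (A^2/2) cos(w*tau), independent of t
```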
φ(ω) = E[e^{iωY}]
     = E[cos ωY + i sin ωY]
     = E[cos ωY] + i E[sin ωY]

Since φ(1) = 0, E[cos Y] = 0 and E[sin Y] = 0.        (1)
Since also φ(2) = 0, E[cos 2Y] = 0 and E[sin 2Y] = 0.        (2)

E[X(t)] = E[cos(λt + Y)]
        = E[cos λt cos Y − sin λt sin Y]
        = cos λt E[cos Y] − sin λt E[sin Y]
        = cos λt (0) − sin λt (0)        [∵ E(cos Y) = 0, E(sin Y) = 0, by (1)]
        = 0, a constant
Hence X(t) is WSS.
Let the random process X(t) = B cos(50t + φ), where B and φ are independent random
variables, B has mean 0 and variance 1, and φ is uniformly distributed in the interval
(−π, π). Find the mean and autocorrelation of the process.

Since Var(B) = E[B²] − {E(B)}² = 1 and E(B) = 0, we get E[B²] = 1.
φ is a random variable uniformly distributed in (−π, π), so its pdf is
f(φ) = 1/(π − (−π)) = 1/(2π), −π < φ < π.
= (1/2) cos(50τ) + (1/8π) {sin(100t + 50τ) − sin(100t + 50τ)}
= (1/2) cos(50τ) + 0
= (1/2) cos(50τ)

Note: In the above problem, X(t) is WSS.
Show that the process X(t) = A cos λt + B sin λt is WSS, where A and B are random
variables, if
(a) E(A) = E(B) = 0, (b) E(A²) = E(B²), (c) E(AB) = 0.

Given: (a) E(A) = E(B) = 0, (b) E(A²) = E(B²) = σ², say, and (c) E(AB) = 0.
To show X(t) is WSS, we have to prove
(i) E[X(t)] = a constant,
(ii) RXX(t, t + τ) = a function of τ, (or) RXX(t1, t2) = a function of the time difference.

E[X(t)] = E(A) cos λt + E(B) sin λt = 0, a constant.        [by (a)]

RXX(t1, t2) = E[X(t1) · X(t2)]
  = E[A²] cos λt1 cos λt2 + E[AB] cos λt1 sin λt2 + E[AB] cos λt2 sin λt1 + E[B²] sin λt1 sin λt2
  = σ² (cos λt1 cos λt2 + sin λt1 sin λt2)        [by (b) and (c)]
  = σ² cos λ(t1 − t2)        [∵ cos(A − B) = cos A cos B + sin A sin B]
  = σ² cos λτ, a function of the time difference.

Hence X(t) is WSS.
When r = 1,

E[X(t)] = Σ_{n=0}^{∞} n Pn(t)
  = [n Pn(t)]_{n=0} + Σ_{n=1}^{∞} n Pn(t)        [∵ by (1)]
  = 0 · at/(1 + at) + Σ_{n=1}^{∞} n (at)^{n−1}/(1 + at)^{n+1}
  = 0 + 1 · 1/(1 + at)² + 2 · at/(1 + at)³ + 3 · (at)²/(1 + at)⁴ + ⋯
  = 1/(1 + at)² [1 + 2(at/(1 + at)) + 3(at/(1 + at))² + ⋯]
  = 1/(1 + at)² [1 − at/(1 + at)]^{−2}        [∵ 1 + 2x + 3x² + ⋯ = (1 − x)^{−2}]
  = 1/(1 + at)² [(1 + at − at)/(1 + at)]^{−2}
  = 1/(1 + at)² [1/(1 + at)]^{−2}
  = 1/(1 + at)² · (1 + at)²
  = 1, a constant.        (3)

Substituting r = 2 in (2),

E[X²(t)] = Σ_{n=0}^{∞} n² Pn(t)
  = [n² Pn(t)]_{n=0} + Σ_{n=1}^{∞} n² Pn(t)        [∵ by (1)]
  = 0 + Σ_{n=1}^{∞} [n(n + 1) − n] Pn(t)
  = Σ_{n=1}^{∞} n(n + 1) Pn(t) − Σ_{n=1}^{∞} n Pn(t)
  = Σ_{n=1}^{∞} n(n + 1) (at)^{n−1}/(1 + at)^{n+1} − 1        [∵ by (3)]
  = 1(2)/(1 + at)² + 2(3) at/(1 + at)³ + 3(4) (at)²/(1 + at)⁴ + ⋯ − 1
  = 2/(1 + at)² [1 + 3(at/(1 + at)) + 6(at/(1 + at))² + ⋯] − 1
  = 2/(1 + at)² [1 − at/(1 + at)]^{−3} − 1        [∵ 1 + 3x + 6x² + ⋯ = (1 − x)^{−3}]
  = 2/(1 + at)² [(1 + at − at)/(1 + at)]^{−3} − 1
  = 2/(1 + at)² [1/(1 + at)]^{−3} − 1
  = 2/(1 + at)² · (1 + at)³ − 1
  = 2(1 + at) − 1
  = 1 + 2at, a function of t.

Var[X(t)] = E[X²(t)] − {E[X(t)]}²
  = 1 + 2at − (1)²
  = 2at
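The two moments can be confirmed by summing the series numerically (an illustrative check; a = 0.5 and t = 3 are arbitrary test values and the sum is truncated at N terms):

```python
# Direct numerical check of E[X(t)] = 1 and E[X^2(t)] = 1 + 2at.
# P0(t) = at/(1+at) and Pn(t) = (at)^(n-1)/(1+at)^(n+1) for n >= 1, as above.
def P(n, a, t):
    at = a * t
    if n == 0:
        return at / (1 + at)
    r = at / (1 + at)                      # write Pn as r^(n-1)/(1+at)^2 to avoid overflow
    return r ** (n - 1) / (1 + at) ** 2

a, t, N = 0.5, 3.0, 2000                   # arbitrary test values; N truncates the series
mean = sum(n * P(n, a, t) for n in range(N))
mom2 = sum(n ** 2 * P(n, a, t) for n in range(N))

print(mean)                                # ~1, a constant
print(mom2, 1 + 2 * a * t)                 # ~1 + 2at, a function of t
print(mom2 - mean ** 2, 2 * a * t)         # variance ~ 2at
```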
E[X(t)] = (1/4) cos t − (1/4) cos t + (1/4) sin t − (1/4) sin t        [∵ by (1)]
        = 0, a constant.        (2)

RXX(t, t + τ) = E[X(t) · X(t + τ)]
  = E[X(t, si) · X(t + τ, si)]
  = Σ_{i=1}^{4} X(t, si) · X(t + τ, si) · P[X(t, si)]
  = X(t, s1) · X(t + τ, s1) · P[X(t, s1)] + X(t, s2) · X(t + τ, s2) · P[X(t, s2)]
    + X(t, s3) · X(t + τ, s3) · P[X(t, s3)] + X(t, s4) · X(t + τ, s4) · P[X(t, s4)]
  = (1/4)(cos t)[cos(t + τ)] + (1/4)(− cos t)[− cos(t + τ)]
    + (1/4)(sin t)[sin(t + τ)] + (1/4)(− sin t)[− sin(t + τ)]
  = (1/4) [2 cos t cos(t + τ) + 2 sin t sin(t + τ)]        [∵ by (1)]
  = (1/2) [cos t cos(t + τ) + sin t sin(t + τ)]
  = (1/2) cos(t − (t + τ))        [∵ cos(A − B) = cos A cos B + sin A sin B]
  = (1/2) cos(−τ)
  = (1/2) cos τ, a function of τ.        (3)
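Because the process has only the four equally likely sample functions cos t, −cos t, sin t and −sin t, both averages can also be checked by direct enumeration (illustrative sketch):

```python
import numpy as np

# The four sample functions appearing in the computation above, each with probability 1/4.
samples = [np.cos, lambda u: -np.cos(u), np.sin, lambda u: -np.sin(u)]

def mean(t):
    return sum(f(t) for f in samples) / 4.0

def autocorr(t, tau):
    return sum(f(t) * f(t + tau) for f in samples) / 4.0

for t in (0.0, 0.7, 2.3):
    print(mean(t), autocorr(t, 0.5))   # mean 0 and autocorrelation cos(0.5)/2 at every t
```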
5.1 Properties of cross correlation
1. The cross-correlation functions satisfy the symmetry relation RXY (τ ) = RY X (−τ ).
2. If X(t) and Y (t) are two random processes and RXX (τ ) and RY Y (τ ) are their
respective autocorrelation functions, then [RXY (τ )]² ≤ RXX (0) · RY Y (0).
Proof: By the Cauchy-Schwarz inequality,
{E[X(t) Y (t + τ )]}² ≤ E[X²(t)] · E[Y²(t + τ )] = RXX (0) · RY Y (0).

3. If X(t) and Y (t) are two random processes, then |RXY (τ )| ≤ (1/2)[RXX (0) + RY Y (0)].
Proof:
RXX (0) and RY Y (0) are second moments (mean-square values) and hence are always non-negative.
The geometric mean of two non-negative quantities never exceeds their arithmetic mean, i.e.,

√(RXX(0) RY Y(0)) ≤ (1/2)[RXX(0) + RY Y(0)]        (1)

We know that |RXY(τ)| ≤ √(RXX(0) RY Y(0))        (by property 2)
                      ≤ (1/2)[RXX(0) + RY Y(0)]        (by (1))

∴ |RXY(τ)| ≤ (1/2)[RXX(0) + RY Y(0)]
5. If the random processes X(t) and Y(t) have zero mean, then
lim_{|τ|→∞} RXY(τ) = lim_{|τ|→∞} RY X(τ) = 0.

Proof: Consider lim_{|τ|→∞} RXY(τ) = lim_{|τ|→∞} E[X(t) Y(t + τ)].
As |τ| → ∞, X(t) and Y(t + τ) can be considered independent, so
lim_{|τ|→∞} RXY(τ) = E[X(t)] · E[Y(t + τ)] = 0 · 0 = 0.
Similarly, lim_{|τ|→∞} RY X(τ) = 0.
Variance(A) = Variance(B)
E[A²] − {E(A)}² = E[B²] − {E(B)}²
E[A²] − 0 = E[B²] − 0        [∵ by (1)]
E[A²] = E[B²] = σ², say        (2)
To show that X(t) and Y(t) are jointly WSS, we have to prove
(i) X(t) is WSS, (ii) Y(t) is WSS, and (iii) RXY(t1, t2) is a function of the time difference alone.

Now (i)(a): E[X(t)] = E(A) cos λt + E(B) sin λt = 0, a constant.        [∵ by (1)]
Now (i)(b):
RXX(t1, t2) = E[X(t1) · X(t2)]
  = E[(A cos λt1 + B sin λt1) · (A cos λt2 + B sin λt2)]
  = E[A² cos λt1 cos λt2 + AB cos λt1 sin λt2 + AB cos λt2 sin λt1 + B² sin λt1 sin λt2]
  = E[A²] cos λt1 cos λt2 + E[AB] cos λt1 sin λt2 + E[AB] cos λt2 sin λt1 + E[B²] sin λt1 sin λt2
  = σ² cos λt1 cos λt2 + 0 + 0 + σ² sin λt1 sin λt2        [∵ by (2) and (3)]
  = σ² (cos λt1 cos λt2 + sin λt1 sin λt2)
  = σ² cos λ(t1 − t2)        [∵ cos(A − B) = cos A cos B + sin A sin B]
  = σ² cos λτ        [where τ = t1 − t2 or τ = t2 − t1]
  = a function of the time difference.
Now (iii):
RXY(t1, t2) = E[X(t1) · Y(t2)]
  = E[(A cos λt1 + B sin λt1) · (B cos λt2 − A sin λt2)]
  = E[AB cos λt1 cos λt2 − A² cos λt1 sin λt2 + B² sin λt1 cos λt2 − AB sin λt1 sin λt2]
  = E[AB] cos λt1 cos λt2 − E[A²] cos λt1 sin λt2 + E[B²] sin λt1 cos λt2 − E[AB] sin λt1 sin λt2
  = 0 − σ² cos λt1 sin λt2 + σ² sin λt1 cos λt2 − 0        [∵ by (2) and (3)]
  = −σ² (cos λt1 sin λt2 − sin λt1 cos λt2)
  = −σ² sin λ(t2 − t1)        [∵ sin(A − B) = sin A cos B − cos A sin B]
  = −σ² sin λτ        [where τ = t2 − t1]
  = a function of the time difference.

∴ (iii) is proved.
Hence X(t) and Y (t) are jointly WSS.
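As a numerical illustration of (iii) (a sketch under stated assumptions: A and B are taken as independent standard normal variables, so that E(A) = E(B) = 0, E(A²) = E(B²) = 1 and E(AB) = 0; λ, t and τ are arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble check that E[X(t1) Y(t2)] ~ -sigma^2 * sin(lambda * (t2 - t1)).
lam, sigma2, t, tau, n = 1.2, 1.0, 0.35, 0.8, 500_000
A = rng.normal(0.0, np.sqrt(sigma2), n)      # E(A) = 0, E(A^2) = sigma^2
B = rng.normal(0.0, np.sqrt(sigma2), n)      # independent of A, so E(AB) = 0

X_t = A * np.cos(lam * t) + B * np.sin(lam * t)
Y_tt = B * np.cos(lam * (t + tau)) - A * np.sin(lam * (t + tau))

print((X_t * Y_tt).mean(), -sigma2 * np.sin(lam * tau))   # the two agree for any t
```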
6 Ergodicity
When we wish to take a measurement of a variable quantity in the laboratory, we
usually obtain multiple measurements of the variable and average them to reduce
measurement errors. If the value of the variable being measured is constant and errors
are due to disturbances (noise) or due to the instability of the measuring instrument,
then averaging is, in fact, a valid and useful technique. ‘Time averaging’ is an extension
of this concept, which is used in the estimation of various statistics of random processes.
We normally use ensemble averages (or statistical averages) such as the mean and
autocorrelation function for characterising random processes. To estimate ensemble
averages, one has to compute a weighted average over all the member functions of the
random process.
For example, the ensemble mean of a discrete random process {X(t)} is computed by
the formula µX = Σ_i xi pi.
If we have access only to a single sample function of the process, then we use its
time-average to estimate the ensemble averages of the process.
Time-average: If {X(t)} is a random process, then

X̄_T = (1/2T) ∫_{−T}^{T} X(t) dt

is called the time-average of {X(t)} over the interval (−T, T).
Ergodic: A random process {X(t)} is said to be ergodic, if its ensemble averages are
equal to appropriate time averages. This definition implies that, with probability 1,
any ensemble average of {X(t)} can be determined from a single sample function of
{X(t)}.
Note: Ergodicity is a stronger condition than stationarity, and hence not all stationary
random processes are ergodic. Moreover, ergodicity is usually defined with respect to one
or more ensemble averages (such as the mean and the autocorrelation function), as discussed
below, and a process may be ergodic with respect to one ensemble average but not others.
Mean-ergodic Process: If the random process {X(t)} has a constant mean E{X(t)} =
µ and if Z T
1
XT = X(t) dt → µ as T → ∞,
2T −T
then {X(t)} is said to be mean-ergodic.
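As an illustration, the random-phase cosine X(t) = cos(t + φ) with φ uniform in (−π, π) has ensemble mean µ = 0 (as shown earlier), and the time average of a single realisation approaches 0 as T grows. The sketch below is purely illustrative; the grid resolution is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(4)

phi = rng.uniform(-np.pi, np.pi)             # one outcome -> one sample function

for T in (10, 100, 1_000, 10_000):
    t = np.linspace(-T, T, 200 * T)
    x_bar = np.cos(t + phi).mean()           # ~ (1/2T) * integral of X(t) over (-T, T)
    print(T, x_bar)                          # tends to mu = 0 as T -> infinity
```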
6.1 Mean-Ergodic Theorem
If {X(t)} is a random process with a constant mean µ and time average X̄_T, then {X(t)}
is mean-ergodic provided lim_{T→∞} Var(X̄_T) = 0.
Proof:

X̄_T = (1/2T) ∫_{−T}^{T} X(t) dt

∴ E(X̄_T) = (1/2T) ∫_{−T}^{T} E{X(t)} dt = (1/2T) ∫_{−T}^{T} µ dt = µ

By Tchebycheff's inequality,

P{|X̄_T − E(X̄_T)| ≤ ε} ≥ 1 − Var(X̄_T)/ε², i.e., P{|X̄_T − µ| ≤ ε} ≥ 1 − Var(X̄_T)/ε²        (2)
Taking the limit as T → ∞ in (2): if lim_{T→∞} Var(X̄_T) = 0, then P{|X̄_T − µ| ≤ ε} → 1,
i.e., lim_{T→∞} X̄_T = E{X(t)} = µ with probability 1.
Note: This theorem provides a sufficient condition for the mean-ergodicity of a random
process. That is, to prove the mean-ergodicity of {X(t)}, it is enough to prove that
lim_{T→∞} Var(X̄_T) = 0.