Topic 6 Random Processes and Signals: 6.1 Review of Probability
Figure 6.1: (a) A probability distribution function. (b) A probability density function or pdf.
The probability distribution function is
$$P(x) = \int_{-\infty}^{x} p(X)\,dX = \mathrm{erf}(x)\,.$$
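As a numerical cross-check, here is a minimal Python sketch. It assumes $p(x)$ is the zero-mean, unit-variance Gaussian, and that the $\mathrm{erf}(\cdot)$ above follows these notes' convention of denoting the cumulative of that Gaussian, which differs from the standard-library `erf` by a scaling:

```python
import numpy as np
from math import erf, sqrt
from scipy.integrate import quad

def p(x):
    """Zero-mean, unit-variance Gaussian pdf (assumed for this check)."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

x = 1.0
P_numeric, _ = quad(p, -np.inf, x)        # P(x) = integral of the pdf up to x
P_closed = 0.5 * (1 + erf(x / sqrt(2)))   # the same cdf via the standard erf
print(P_numeric, P_closed)                # both ~0.8413
```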
[Figure: an ensemble of realisations of a random process, each plotted against time t.]
The time average of a single realisation is
$$\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x(t)\,dt\,.$$
Note that $E[x(t)\,x(t+\tau)]$ is a POWER.
6.4.2 Ergodicity
We now have two viable definitions of the autocorrelation function of a stationary
random process. One is the ensemble average
$$R^{E}_{xx}(\tau) = E[x(t)\,x(t+\tau)]\,,$$
and the other is the integral over time
$$R^{T}_{xx}(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x(t)\,x(t+\tau)\,dt\,.$$
Both are perfectly proper things to evaluate. In the first case, you are taking a
snapshot at one time of a whole ensemble of processes, and in the second you are
looking at the behaviour of one process over all time.
If the results are the same, the process is ergodic.
An ergodic random process
is a stationary random process whose ensemble autocorrelation is identical
with the temporal autocorrelation.
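As an illustration, the sketch below estimates both autocorrelations for a first-order autoregressive process (an ergodic process; the pole value, lag, and trial counts are my illustrative choices, not from the notes) and confirms that they agree:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
a, lag = 0.9, 3                            # AR(1) pole; an ergodic process

# Ensemble estimate: many short realisations, averaged at one fixed time t
n_trials, n_samples, t = 20_000, 400, 300
w = rng.standard_normal((n_trials, n_samples))
x = lfilter([1.0], [1.0, -a], w, axis=1)   # x[k] = a x[k-1] + w[k]
R_ensemble = np.mean(x[:, t] * x[:, t + lag])

# Temporal estimate: one long realisation, averaged over time
x1 = lfilter([1.0], [1.0, -a], rng.standard_normal(500_000))
R_temporal = np.mean(x1[:-lag] * x1[lag:])

print(R_ensemble, R_temporal)              # both ~ a^3 / (1 - a^2) ~ 3.84
```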
The power spectral density $S(\omega)$ of a stationary random process is defined exactly
as in Lecture 4:
$$S_{xx}(\omega) = \mathrm{FT}\left[R_{xx}(\tau)\right]\,.$$
We will be concerned exclusively with ergodic processes, so that the above statement does not give rise to ambiguity.
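A minimal discrete-time sketch of this relation, using a biased autocorrelation estimate (the signal and lengths are illustrative assumptions). The FT of the estimated autocorrelation reproduces the zero-padded periodogram exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
N = len(x)

# Biased autocorrelation estimate at lags -(N-1)..(N-1); index N-1 is lag 0
R = np.correlate(x, x, mode="full") / N

# FT of the autocorrelation vs the zero-padded periodogram |X|^2 / N
M = 2 * N - 1
S_from_R = np.fft.fft(np.fft.ifftshift(R), M).real
S_direct = np.abs(np.fft.fft(x, M))**2 / N

print(np.allclose(S_from_R, S_direct))   # True, up to rounding
```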
We are just about at the point where we can discuss how to analyze the response of
systems to random processes. However, it is evident that we cannot give "exact"
results for such processes in the time domain.
We will be able to make statements about the autocorrelation and power
spectral density, and in addition have access to other statistical descriptors, such
as the mean and variance.
As reminders, and for completeness, the mean is
$$\mu = E[x] = \int_{-\infty}^{\infty} \xi\, p(\xi)\,d\xi\,.$$
The variance is
$$\sigma^2 = E[(x-\mu)^2] = E[x^2] - (E[x])^2\,,$$
where
$$E[x^2] = \int_{-\infty}^{\infty} \xi^2\, p(\xi)\,d\xi\,.$$
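For concreteness, a small numerical version of these integrals, for an assumed unit Gaussian pdf (the pdf choice is mine, purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

p = lambda xi: np.exp(-xi**2 / 2) / np.sqrt(2 * np.pi)  # assumed pdf

mu, _  = quad(lambda xi: xi * p(xi),    -np.inf, np.inf)  # E[x]
Ex2, _ = quad(lambda xi: xi**2 * p(xi), -np.inf, np.inf)  # E[x^2]
var = Ex2 - mu**2                                         # sigma^2
print(mu, var)                                            # ~0.0, ~1.0
```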
[Figure: a sample realisation of a random signal plotted against time (milliseconds).]
The final part of the story is to work out how systems affect the descriptors.
6.7.1 Mean
Given a temporal input $x(t)$, the output $y(t)$ is, as ever,
$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau\,.$$
[Figure: block diagram of the system, $x(t) \to h(t) \to y(t)$ in the time domain and $X(\omega) \to H(\omega) \to Y(\omega)$ in the frequency domain.]
As the autocorrelation is the inverse Fourier transform of the power spectral density,
$$R_{yy}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{yy}(\omega)\,e^{i\omega\tau}\,d\omega\,.$$
[Q] White noise with zero mean and variance $\sigma^2$ is input to the circuit in the
Figure. Sketch the input's autocorrelation and power spectral density, derive the
mean and variance of the output, and sketch the output's autocorrelation and
power spectral density.
Figure 6.5: An RC low-pass circuit: input $x(t)$, series resistance $R$, shunt capacitance $C$, output $y(t)$.
[A] The input $x(t)$ is zero-mean white noise, with a variance of $\sigma^2$. Thus
$$R_{xx}(\tau) = \sigma^2\delta(\tau) \;\rightleftharpoons\; S_{xx}(\omega) = \mathrm{FT}\left[\sigma^2\delta(\tau)\right] = \sigma^2\,.$$
Variance only ... If we were interested only in the variance of $y(t)$ we might proceed
by writing
$$E[y^2] = R_{yy}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{yy}(\omega)\,d\omega
= \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{\sigma^2}{1+\omega^2 (RC)^2}\,d\omega$$
$$(\text{substitute } \omega RC = \tan\theta)\quad
= \frac{1}{2\pi}\,\frac{\sigma^2}{RC}\int_{-\pi/2}^{\pi/2} \frac{\sec^2\theta}{\sec^2\theta}\,d\theta
= \frac{\sigma^2}{2RC}\,.$$
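A simulation cross-check of this result, as a sketch under stated assumptions: continuous white noise with $R_{xx}(\tau)=\sigma^2\delta(\tau)$ is approximated by discrete samples of variance $\sigma^2/\Delta t$ (so the sampled PSD is flat at $\sigma^2$), the circuit by a forward-Euler recursion, and all numbers are illustrative:

```python
import numpy as np
from scipy.signal import lfilter

sigma2, RC, dt, n = 1.0, 0.05, 1e-4, 1_000_000   # illustrative values
rng = np.random.default_rng(2)

# Discrete stand-in for white noise: variance sigma^2/dt -> flat PSD sigma^2
x = rng.standard_normal(n) * np.sqrt(sigma2 / dt)

# Forward-Euler discretisation of the RC low-pass: y' = (x - y)/RC
alpha = dt / RC
y = lfilter([alpha], [1.0, -(1.0 - alpha)], x)   # y[k] = (1-a)y[k-1] + a x[k]

print(np.var(y[n // 10:]))                       # ~10.0 (transient discarded)
print(sigma2 / (2 * RC))                         # exact answer: 10.0
```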
Figure 6.6: (a) Autocorrelation (a delta of weight $\sigma^2$) and PSD (flat at $\sigma^2$) of input $x(t)$; (b) autocorrelation (peak $\sigma^2/2RC$) and PSD of output $y(t)$.
Can we understand the curves? The power spectral density looks sensible.
The circuit was a low-pass filter with 1st-order roll-off. The filter will greatly
diminish higher frequencies, but not cut them off completely. Note that the power
at zero frequency remains at $\sigma^2$.
The auto-correlation needs a little more thought. We argued that the autocorrelation
of white noise was a delta-function because the next instant's value was
equally likely to be positive or negative, large or small; in other words, there was
no restriction on the rate of change of the signal. But the low-pass filter prevents
such rapid change, so the next instant's value is more likely to be similar to
this instant's. However, the correlation will fade away with time. This is exactly
what you see. Moreover, the decay in the autocorrelation has a decay constant of
$1/RC$.
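That exponential decay can be checked from the same kind of simulation. The sketch below reuses the illustrative discretisation from the variance check above and compares the estimated autocorrelation with $(\sigma^2/2RC)\,e^{-|\tau|/RC}$; every number is an assumption for illustration:

```python
import numpy as np
from scipy.signal import lfilter

sigma2, RC, dt, n = 1.0, 0.05, 1e-4, 2_000_000
rng = np.random.default_rng(5)
x = rng.standard_normal(n) * np.sqrt(sigma2 / dt)
alpha = dt / RC
y = lfilter([alpha], [1.0, -(1.0 - alpha)], x)

# Estimate R_yy at a few lags, compare with (sigma^2/2RC) exp(-|tau|/RC)
for lag in [0, 250, 500, 1000]:                  # lags in samples
    tau = lag * dt
    R_est = np.mean(y[:n - lag] * y[lag:])
    R_theory = sigma2 / (2 * RC) * np.exp(-tau / RC)
    print(f"tau={tau:6.3f}  estimate={R_est:6.3f}  theory={R_theory:6.3f}")
```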
6.10 Example
[Q] Each pulse in a continuous binary pulse train has a fixed duration, $T$, but takes
the value 0 or 1 with equal probability ($P = 1/2$), independently of the previous
pulse. An example of part of such a train is sketched below.
Figure 6.7: Part of a binary pulse train taking values 0 and 1.
The mean square value of a pulse is
$$E[x_i^2] = \sum_i x_i^2 P_i = 0^2\cdot\frac{1}{2} + 1^2\cdot\frac{1}{2} = \frac{1}{2}\,.$$
Figure 6.8: Repeated trials of the pulse train, each sampled at times $t$ and $t+\tau$.
Figure 6.9: The autocorrelation $R_{xx}(\tau)$: a triangle of peak 1/2 at $\tau=0$, falling to a floor of 1/4 at $|\tau| = T$.
When $0 \le \tau \le T$,
$$\begin{aligned}
R^{E}_{xx}(\tau) &= p(x(t)=1,\; x(t+\tau)=1)\\
&= p(x(t)=1,\; \text{no } \downarrow \text{ transition during following } \tau)\\
&= p(x(t)=1)\; p(\text{no } \downarrow \text{ transition during following } \tau)\\
&= p(x(t)=1)\,\bigl(1 - p(\downarrow \text{ transition during following } \tau)\bigr)\\
&= \frac{1}{2}\left(1 - \frac{1}{2}\, p(\text{ANY transition during following } \tau)\right)\\
&= \frac{1}{2}\left(1 - \frac{1}{2}\,\frac{\tau}{T}\right)
\end{aligned}$$
The last step multiplies the frequency of pulse boundaries, $1/T$, by the time interval $\tau$
to get the probability of a transition; the preceding factor of $1/2$ is the probability that
the pulse after a boundary differs from the one before.
Finally we use the even symmetry property to complete the function:
$$R_{xx}(\tau) = \frac{1}{4} + \frac{1}{4}\,\Lambda_T(\tau)$$
where $\Lambda_T(\tau)$ is a triangle of unit height and half-width $T$.
3. The FT of the triangle of half-width $T$ (Lec 2) is $T\,\dfrac{\sin^2(\omega T/2)}{(\omega T/2)^2}$, and the FT of
the constant $1/4$ is $\dfrac{\pi}{2}\,\delta(\omega)$, so
$$S_{xx}(\omega) = \frac{\pi}{2}\,\delta(\omega) + \frac{T}{4}\,\frac{\sin^2(\omega T/2)}{(\omega T/2)^2}\,.$$
Figure 6.10: The power spectral density $S_{xx}(\omega)$ of the binary wave.
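A quick simulation sketch of the autocorrelation just derived (the sampling resolution and train length are illustrative assumptions, and pulse boundaries are aligned to the sample grid):

```python
import numpy as np

rng = np.random.default_rng(3)
T, spp, n_pulses = 1.0, 50, 40_000          # spp = samples per pulse
bits = rng.integers(0, 2, n_pulses)         # each pulse 0 or 1, P = 1/2
x = np.repeat(bits, spp).astype(float)

def R_hat(x, lag):
    """Time-averaged autocorrelation estimate at an integer lag."""
    return np.mean(x[:len(x) - lag] * x[lag:])

for lag in [0, spp // 2, spp, 2 * spp]:
    tau = lag / spp * T
    predicted = 0.25 + 0.25 * max(0.0, 1.0 - abs(tau) / T)   # 1/4 + (1/4) Lambda_T
    print(f"tau={tau:4.1f}  estimate={R_hat(x, lag):.3f}  predicted={predicted:.3f}")
```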
Given power signals $x(t)$ and $y(t)$ we must derive the result $S_{yy}(\omega) = |H(\omega)|^2 S_{xx}(\omega)$
without assuming the existence of $X(\omega)$ and $Y(\omega)$.
Start by writing the autocorrelation of the output,
$$R_{yy}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} y(t)\,y(t+\tau)\,dt\,.$$
As the output $y(t)$ is the convolution of the input with the impulse response
function,
$$y(t) = \int_{-\infty}^{\infty} x(t-p)\,h(p)\,dp\,,\qquad
y(t+\tau) = \int_{-\infty}^{\infty} x(t+\tau-q)\,h(q)\,dq\,.$$
Substituting these into the expression for $R_{yy}(\tau)$ and exchanging the order of integration gives
$$R_{yy}(\tau) = \left[R_{xx} * h\right](\tau) * h(-\tau)\,.$$
Taking Fourier Transforms of both sides we find
$$\begin{aligned}
S_{yy}(\omega) &= \mathrm{FT}\left[R_{yy}(\tau)\right]\\
&= \mathrm{FT}\left[\left[R_{xx} * h\right](\tau)\right]\cdot \mathrm{FT}\left[h(-\tau)\right]\\
&= \mathrm{FT}\left[R_{xx}(\tau)\right]\cdot \mathrm{FT}\left[h(\tau)\right]\cdot \mathrm{FT}\left[h(-\tau)\right]\\
&= S_{xx}(\omega)\, H(\omega)\, H^*(\omega) = |H(\omega)|^2\, S_{xx}(\omega)\,,
\end{aligned}$$
since, for real $h(t)$, $\mathrm{FT}[h(-\tau)] = H^*(\omega)$.
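The result can be checked numerically. The sketch below compares averaged periodograms of input and output for an arbitrary short FIR impulse response (the filter taps, lengths, and tolerances are my illustrative choices); small discrepancies come from filter edge effects:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
N, trials = 1024, 400
h = np.array([0.5, 0.3, -0.2, 0.1])        # an arbitrary impulse response
H = np.fft.fft(h, N)

Sxx = np.zeros(N)
Syy = np.zeros(N)
for _ in range(trials):
    x = rng.standard_normal(N)
    y = lfilter(h, [1.0], x)               # y = x * h
    Sxx += np.abs(np.fft.fft(x))**2 / N    # periodogram estimates of the PSDs
    Syy += np.abs(np.fft.fft(y))**2 / N
Sxx /= trials
Syy /= trials

# S_yy should track |H|^2 S_xx across all frequency bins
print(np.allclose(Syy, np.abs(H)**2 * Sxx, rtol=0.05, atol=0.01))
```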