
Topic 6

Random Processes and Signals


Recall our description in Lecture 1 of random vs deterministic signals.
A deterministic signal is one that can be described by a function, mapping, or some other recipe or algorithm: if you know t, you can work out f(t). On the other hand, a random signal is determined by some underlying random process. Although its statistical properties might be known (e.g. you might know its mean and variance), you cannot evaluate its value at time t. You might be able to say something about the likelihood of its taking some value at time t, but not more.
Why are we particularly bothered with random signals?
First, the signals we usually regard as noise are random signals. Being able to determine their properties in the frequency domain can be very useful in diminishing their effect on information-bearing signals.
Second, the signals from a random process might be the information-bearing signals themselves. For example, signals generated by random solar activity might be regarded as noise by the satellite communications engineer, but might tell the solar physicist about underlying nuclear processes in the sun.
However, you may have noticed that all our analysis so far has involved integration over all time: $\int_{-\infty}^{\infty} \ldots \, dt$ abounds. While this is fine for a deterministic signal, for a random signal it implies having to wait for all time, collecting the random signal, before we can work out, say, its auto-correlation.
Fortunately there is a way around this difficulty.
6.1 Review of probability

Consider a random variable x. The probability distribution function underlying the values of x is defined by
$$P_x(\xi) = \text{Prob}[x \le \xi] .$$
The characteristic of a distribution function is that it varies from 0 at $\xi = -\infty$ to 1 at $\xi = +\infty$ and has a non-negative gradient.

The probability density function (pdf) $p_x(\xi)$ is defined by
$$p_x(\xi)\,d\xi = \text{Prob}[\xi < x \le \xi + d\xi]$$
or
$$p_x(\xi) = \frac{d}{d\xi} P_x(\xi) .$$
For all $\xi$, $p_x(\xi) \ge 0$, and $\int_{-\infty}^{\infty} p_x(\xi)\,d\xi = 1$.

It is often convenient to write simply $p(x)$.

Figure 6.1: (a) A probability distribution function $P(x)$. (b) A probability density function or pdf $p(x)$.

6.2 Example: Gaussian pdf

A 1-d Gaussian pdf with mean $\mu$ and variance $\sigma^2$ is
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) .$$
The probability distribution function is
$$P(x) = \int_{-\infty}^{x} p(X)\,dX = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right] .$$
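As an illustrative aside (not part of the original notes), the pdf and distribution function above are easy to evaluate numerically; the sketch below uses only the Python standard library, and the values of mu and sigma are arbitrary choices.

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """1-d Gaussian pdf with mean mu and variance sigma**2."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """Distribution function P(x), expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# The numerical derivative of P(x) should match p(x), and P should rise from ~0 to ~1.
for x in (-3.0, -1.0, 0.0, 1.0, 3.0):
    h = 1e-5
    dP_dx = (gaussian_cdf(x + h) - gaussian_cdf(x - h)) / (2 * h)
    print(f"x={x:5.1f}  P(x)={gaussian_cdf(x):.4f}  p(x)={gaussian_pdf(x):.4f}  dP/dx={dP_dx:.4f}")
```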

6.3 Random processes and ensembles

The random variable x might be regarded as random in itself, or regarded as the mapping from the outcome $\alpha$ of a random experiment, $x \equiv x(\alpha)$.
We will be concerned with random processes $x(t) \equiv x(\alpha; t)$, which are functions of time. When we refer to $x(t_1)$ or $x(t_1; \alpha)$ it is important to realize that this is not a unique value of the random variable, but one of an ensemble of values that could be obtained in repeated trials, as sketched in Figure 6.2.

Figure 6.2: Ensembles: several realizations of a random process, each plotted against t and sampled at time $t_1$.

Thus $x(t_1)$ has an expected value of
$$E[x(t_1)] = \int_x x(t_1)\,p(x)\,dx = \int_\alpha x(t_1; \alpha)\,p(\alpha)\,d\alpha .$$

This is called an ensemble average.


Just suppose that $E[x(t)]$ was the same for any value of t. It might be re-written as just $E[x]$. A very reasonable question to ask is whether this is the same as the temporal average
$$\lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\,dt .$$

We will return to this shortly.

6.4 The ensemble autocorrelation

The basic "ensemble" definition of the autocorrelation of a random process involves both of the times $t_1$ and $t_2$ at which samples are made:
$$R_{xx}(t_1, t_2) = E[x(t_1)\,x(t_2)] .$$
6.4.1 Stationarity
If it is found that the expectation value $E[x(t)]$ is independent of time, and that the ensemble value for the autocorrelation depends only on the difference between $t_2$ and $t_1$, then the random process is said to be stationary (in the wide sense, meaning weakly stationary).

For a (wide-sense) stationary process, the ensemble autocorrelation is
$$R_{xx}(\tau) = E[x(t)\,x(t+\tau)] = \int_{-\infty}^{\infty} x(t; \alpha)\,x(t+\tau; \alpha)\,p(\alpha)\,d\alpha .$$
Many engineering processes are stationary.
Notice anything about the ensemble autocorrelation? It has units of power, and the proper comparison now is with the definition of autocorrelation for power signals: those which do not necessarily have a Fourier transform, but do have a Power Spectral Density.

$E[x(t)\,x(t+\tau)]$ is a POWER.

6.4.2 Ergodicity
We now have two viable definitions of the autocorrelation function of a stationary random process. One is the ensemble average
$$R^E_{xx}(\tau) = E[x(t)\,x(t+\tau)] ,$$
and the other is the integral over time
$$R^T_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\,x(t+\tau)\,dt .$$
Both are perfectly proper things to evaluate. In the first case, you are taking a snapshot at one time of a whole ensemble of processes, and in the second you are looking at the behaviour of one process over all time.
If the results are the same, the process is ergodic.
An ergodic random process is a stationary random process whose ensemble autocorrelation is identical with the temporal autocorrelation.

6.4.3 Ergodicity exemplified

A village has two pubs. Suppose you wished to know which was the more popular. The ensemble approach would be to go into both on the same evening and count the number of people in each. The temporal approach would be to follow one villager for many evenings and log which (s)he chose to go to.
If you reached the same result, you would have shown that the drinking preferences of villagers are an ergodic process. Of course, humans rarely behave ergodically, but engineering systems do.
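A small numerical illustration of the same idea (not from the notes): the sketch below uses a random-phase cosine, $x(t) = \cos(\omega_0 t + \phi)$ with $\phi$ uniform on $[0, 2\pi)$, a process that is stationary and ergodic in autocorrelation with $R_{xx}(\tau) = \tfrac{1}{2}\cos(\omega_0\tau)$. The frequency, lag and trial counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

w0 = 2 * np.pi * 5.0          # 5 Hz random-phase cosine (an assumed example process)
tau = 0.03                    # lag at which to compare the two averages

# Ensemble average: many realizations, one pair of sample times.
n_trials = 200_000
t1 = 0.37                     # arbitrary sample time
phi = rng.uniform(0, 2 * np.pi, n_trials)
R_ensemble = np.mean(np.cos(w0 * t1 + phi) * np.cos(w0 * (t1 + tau) + phi))

# Temporal average: one realization observed for a long time.
dt = 1e-4
t = np.arange(0.0, 200.0, dt)
x = np.cos(w0 * t + rng.uniform(0, 2 * np.pi))
k = int(round(tau / dt))
R_temporal = np.mean(x[:-k] * x[k:])

print(f"ensemble  R_xx({tau}) = {R_ensemble:.4f}")
print(f"temporal  R_xx({tau}) = {R_temporal:.4f}")
print(f"theory    R_xx({tau}) = {0.5 * np.cos(w0 * tau):.4f}")
```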

6.5 Power Spectral Density

The power spectral density $S(\omega)$ of a stationary random process is defined exactly as in Lecture 4:
$$S_{xx}(\omega) = FT[R_{xx}(\tau)] .$$
We will be concerned exclusively with ergodic processes, so that the above statement does not give rise to ambiguity.

6.6 Descriptors of a random process

We are about at the point where we can discuss how to analyze the response of systems to random processes. However, it is evident that we cannot give "exact" results for such processes in the time domain.
We will be able to make statements about the autocorrelation and power spectral density, and in addition have access to other statistical descriptors, such as the mean and variance.
As reminders, and for completeness, the mean is
$$\mu = E[x] = \int x(\alpha)\,p(\alpha)\,d\alpha .$$
The variance is
$$\sigma^2 = E[(x - \mu)^2] = E[x^2] - (E[x])^2 ,$$
where
$$E[x^2] = \int x^2(\alpha)\,p(\alpha)\,d\alpha .$$


Figure 6.3: White noise (zero-mean random noise plotted against time in milliseconds).

6.6.1 Application to a white noise process

White noise describes a random process whose mean is zero and whose autocorrelation is a delta-function. Can we understand why this is so?
First, because
$$R_{xx}(0) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2\,dt$$
it is obvious that $R_{xx}(0)$ for any non-zero signal is finite.
Now white noise has the property that it is equally likely to take positive or negative values from instant to instant. So when you shift the signal by even an infinitesimal amount to find
$$R_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\,x(t+\tau)\,dt ,$$
the individual products $x(t)\,x(t+\tau)$ being integrated are equally likely to be positive and negative, large and small. Summing these by integration must give zero. Thus $R_{xx}(\tau) = A\,\delta(\tau)$.
The question now is: what is A? The earlier expression actually indicates $R_{xx}(0) = E[x^2]$. But for a process with $\mu = 0$,
$$\sigma^2 = E[x^2] - \mu^2 = E[x^2] .$$
So $A = \sigma^2$.

For zero-mean white noise with variance $\sigma^2$,
$$R_{xx}(\tau) = \sigma^2\,\delta(\tau) \quad\Leftrightarrow\quad S_{xx}(\omega) = \sigma^2 .$$
The power spectral density, which is the Fourier transform of the autocorrelation function, is uniform and has the value of the signal's variance.
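As a quick check (an illustration, not from the notes), the discrete-time analogue behaves the same way: for independent zero-mean samples of variance sigma**2, the estimated autocorrelation is close to sigma**2 at zero lag and close to zero at every other lag.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.7
N = 100_000
x = rng.normal(0.0, sigma, N)   # zero-mean white noise samples with variance sigma**2

def autocorr(x, max_lag):
    """Biased estimate R[k] = (1/N) * sum_n x[n] x[n+k], for k = 0..max_lag."""
    N = len(x)
    return np.array([np.dot(x[: N - k], x[k:]) / N for k in range(max_lag + 1)])

R = autocorr(x, 5)
print("R[0]    (expect about sigma**2 =", round(sigma**2, 3), "):", round(R[0], 4))
print("R[1..5] (expect about 0):", np.round(R[1:], 4))
```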
6.7 Response of system to random signals

The final part of the story is to work out how systems affect the descriptors.
6.7.1 Mean
Given a temporal input x(t) the output y(t) is, as ever,
$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau .$$

Figure 6.4: Random input x(t) and random output y(t); in the frequency domain, $X(\omega)$, $H(\omega)$ and $Y(\omega)$. The system, however, is deterministic!

The expectation value (mean) of the output is
$$E[y] = E\!\left[\int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau\right]
= \int_{\alpha=-\infty}^{\infty}\left[\int_{\tau=-\infty}^{\infty} x(\tau; \alpha)\,h(t - \tau)\,d\tau\right] p(\alpha)\,d\alpha$$
$$= \int_{\tau=-\infty}^{\infty}\left[\int_{\alpha=-\infty}^{\infty} x(\tau; \alpha)\,p(\alpha)\,d\alpha\right] h(t - \tau)\,d\tau
= \int_{\tau=-\infty}^{\infty} E[x(\tau)]\,h(t - \tau)\,d\tau .$$

But for a stationary input $E[x(\tau)]$ is independent of $\tau$: it is the constant $E[x]$. Taking this outside the integral, and then substituting $p = t - \tau$, we find that
$$E[y] = E[x]\int_{-\infty}^{\infty} h(t - \tau)\,d\tau = E[x]\int_{-\infty}^{\infty} h(p)\,dp .$$
For a hint as to how to proceed, we might guess how the mean should transform. As it is an unchanging d.c. value, we might expect $E[y] = E[x]\,H(0)$ ...

But we know that $FT[h] = H(\omega)$, so that
$$\int_{-\infty}^{\infty} h(p)\,e^{-i\omega p}\,dp = H(\omega) \;\Rightarrow\; \int_{-\infty}^{\infty} h(p)\,e^{-i0p}\,dp = H(0) \;\Rightarrow\; \int_{-\infty}^{\infty} h(p)\,dp = H(0) .$$

For a stationary random input x(t) the mean of the output is
$$E[y] = E[x]\,H(0) ,$$
which also means that the output is stationary.
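A quick numerical illustration (not part of the notes): in discrete time the same result holds with $H(0) = \sum_n h[n]$, so a noisy input with non-zero mean has its mean scaled by the d.c. gain. The impulse response below is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

h = np.array([0.5, 0.3, 0.2, 0.1])          # arbitrary impulse response (assumption)
H0 = h.sum()                                # d.c. gain H(0)

mean_x = 2.0
x = mean_x + rng.normal(0.0, 1.0, 500_000)  # stationary input with E[x] = 2
y = np.convolve(x, h, mode="valid")         # system output

print("E[x] * H(0)   =", mean_x * H0)
print("measured E[y] =", y.mean())
```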
6.7.2 Variance
To recover the variance of a signal after passing through a system we take a different approach.
First recall that for an ergodic signal, $R_{xx}(0) = E[x^2]$. As $\sigma^2 = E[x^2] - (E[x])^2$, knowledge of the mean and autocorrelation of x(t) allows the variance of the input to be found.
Now recall the Wiener-Khinchin Theorem for infinite energy signals. It said:
The Fourier Transform of the Auto-Correlation is the Power Spectral Density,
$$FT[R_{xx}(\tau)] = S_{xx}(\omega) .$$
If the random process x(t) is stationary and ergodic, this statement must still hold good. It must hold good for the output y(t) too:
$$FT[R_{yy}(\tau)] = S_{yy}(\omega) .$$
Hence, as proved at the end of this lecture, for any power signal:
The Power Spectral Densities of output and input of a system with transfer function $H(\omega)$ are related by
$$S_{yy}(\omega) = |H(\omega)|^2\,S_{xx}(\omega) , \quad \text{where, NB,} \quad |H(\omega)|^2 = H(\omega)\,H^*(\omega) .$$

As
$$R_{yy}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{yy}(\omega)\,e^{i\omega\tau}\,d\omega ,$$

it follows immediately that
$$R_{yy}(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{yy}(\omega)\,d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} |H(\omega)|^2\,S_{xx}(\omega)\,d\omega .$$
So, if we know the autocorrelation of the input, and the transfer function, we can find the autocorrelation and $E[y^2]$ of the output. With knowledge of the mean E[y] derived in the previous subsection, we can determine the variance of the output.
6.8 The complete recipe

Here is the recipe, given mean $\mu_x$ and autocorrelation $R_{xx}$ of the input (a numerical sketch follows below):

• use the input's mean and autocorrelation to find its variance;
• take the FT of the autocorrelation to find the PSD $S_{xx}$ of the input;
• multiply by $|H(\omega)|^2$ to find the PSD $S_{yy}$ of the output;
• take the IFT at $\tau = 0$ to find $R_{yy}(0)$;
• find the mean of the output, $\mu_y = H(0)\,\mu_x$;
• find the variance, $\sigma_y^2 = R_{yy}(0) - \mu_y^2$.
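Below is a minimal numerical sketch of the recipe (an illustration, not from the notes). The input is zero-mean white noise with $S_{xx}(\omega) = \sigma^2$, and the system is a first-order low-pass $H(\omega) = 1/(1 + j\omega RC)$, chosen to anticipate the worked example of Section 6.9, whose analytic answer $\sigma^2/2RC$ provides the check.

```python
import numpy as np

sigma2 = 4.0        # variance (PSD level) of the white-noise input
RC = 0.01           # filter time constant
mu_x = 0.0          # input mean

w = np.linspace(-1e6, 1e6, 400_001)     # frequency grid (rad/s), wide enough for the roll-off
dw = w[1] - w[0]

Sxx = sigma2 * np.ones_like(w)          # PSD of the input (white)
H = 1.0 / (1.0 + 1j * w * RC)           # transfer function
Syy = np.abs(H) ** 2 * Sxx              # PSD of the output

Ryy0 = Syy.sum() * dw / (2 * np.pi)     # R_yy(0) = E[y^2], inverse FT evaluated at tau = 0
mu_y = 1.0 * mu_x                       # mean of output = H(0) * mu_x, and H(0) = 1 here
var_y = Ryy0 - mu_y ** 2

print("numerical var(y) =", round(var_y, 3))
print("analytic  var(y) =", sigma2 / (2 * RC))
```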
6.9 Example

[Q] White noise with zero mean and variance $\sigma^2$ is input to the circuit in the Figure. Sketch the input's autocorrelation and power spectral density, derive the mean and variance of the output, and sketch the output's autocorrelation and power spectral density.

Figure 6.5: An RC circuit: input x(t), series resistance R, shunt capacitance C, with the output y(t) taken across the capacitor.

[A] The input x(t) is zero-mean white noise, with a variance of $\sigma^2$. Thus
$$R_{xx}(\tau) = \sigma^2\,\delta(\tau) \quad\Leftrightarrow\quad S_{xx}(\omega) = FT[\sigma^2\,\delta(\tau)] = \sigma^2 .$$

The transfer function of the RC circuit in the Figure is
$$H(\omega) = \frac{(1/j\omega C)}{R + (1/j\omega C)} = \frac{1}{1 + j\omega RC} .$$

The mean of the output is
$$E[y] = E[x]\,H(0) = 0 \times 1 = 0 .$$
The power spectral density of the output is
$$S_{yy}(\omega) = S_{xx}\,|H(\omega)|^2 = \frac{\sigma^2}{1 + \omega^2 (RC)^2} .$$

Variance only ... If we were interested only in the variance of y(t) we might proceed by writing
$$E[y^2] = R_{yy}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{yy}(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{\sigma^2}{1 + \omega^2 (RC)^2}\,d\omega$$
(substituting $\omega RC = \tan\theta$)
$$= \frac{1}{2\pi} \cdot \frac{\sigma^2}{RC} \int_{-\pi/2}^{\pi/2} \frac{\sec^2\theta}{\sec^2\theta}\,d\theta = \frac{\sigma^2}{2RC} .$$

The variance of y is therefore
$$\sigma_y^2 = E[y^2] - (E[y])^2 = \frac{\sigma^2}{2RC} .$$

Autocorrelation function ... But to work out the complete autocorrelation we can use the standard FT pair from HLT and set a = 1:
$$e^{-a|t|} \;\Leftrightarrow\; \frac{2a}{a^2 + \omega^2} \quad\Rightarrow\quad \frac{1}{2}\,e^{-|t|} \;\Leftrightarrow\; \frac{1}{1 + \omega^2} .$$
Now use the parameter scaling or similarity property from Lecture 2 with scale factor 1/RC. It gives
$$f(t/RC) \;\Leftrightarrow\; |RC|\,F(\omega RC)$$
so that
$$\frac{1}{2|RC|}\,e^{-|t/RC|} \;\Leftrightarrow\; \frac{1}{1 + \omega^2 (RC)^2} .$$
Finally, given that RC > 0,
$$R_{yy}(\tau) = \frac{\sigma^2}{2RC}\,e^{-|\tau|/RC} \;\Leftrightarrow\; \frac{\sigma^2}{1 + \omega^2 (RC)^2} = S_{yy}(\omega) .$$
Of course it is a relief to see that $R_{yy}(0) = \sigma^2/2RC$, as we found earlier.

Figure 6.6: (a) Autocorrelation $R_{xx}(\tau)$ and PSD $S_{xx}(\omega)$ of the input x(t), both of level $\sigma^2$; (b) autocorrelation $R_{yy}(\tau)$, of peak $\sigma^2/2RC$, and PSD $S_{yy}(\omega)$ of the output y(t).

Can we understand the curves? The power spectral density looks sensible. The circuit was a low-pass filter with 1st-order roll-off. The filter will greatly diminish higher frequencies, but not cut them off completely. Note that the power spectral density at zero frequency remains at $\sigma^2$.
The auto-correlation needs a little more thought. We argued that the autocorrelation of white noise was a delta-function because the next instant's value was equally likely to be positive or negative, large or small: in other words there was no restriction on the rate of change of the signal. But the low-pass filter prevents such rapid change, so the next instant's value is more likely to be similar to this instant's. However, the correlation will fade away with time. This is exactly what you see. Moreover, the decay in the autocorrelation has a decay constant of 1/RC (that is, a time constant of RC).
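This result is easy to check by simulation (an illustration, not part of the notes). Continuous white noise of spectral level $\sigma^2$ can be approximated on a time grid of spacing $\Delta t$ by independent samples of variance $\sigma^2/\Delta t$; feeding them through a discretized version of the RC filter should then give an output variance close to $\sigma^2/2RC$. The values of sigma2, RC and dt below are arbitrary, with dt chosen much smaller than RC.

```python
import numpy as np

rng = np.random.default_rng(3)

sigma2 = 4.0      # spectral level of the continuous white noise
RC = 0.01         # filter time constant (seconds)
dt = 1e-5         # simulation step, << RC
n = 2_000_000

# Discrete approximation of continuous white noise with S_xx(w) = sigma2.
x = rng.normal(0.0, np.sqrt(sigma2 / dt), n)

# Forward-Euler discretization of RC * dy/dt + y = x.
a = dt / RC
y = np.empty(n)
y[0] = 0.0
for k in range(n - 1):
    y[k + 1] = y[k] + a * (x[k] - y[k])

print("simulated var(y) =", round(y[n // 4:].var(), 1))   # discard the start-up transient
print("analytic  var(y) =", sigma2 / (2 * RC))
```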
6.10 Example

[Q] Each pulse in a continuous binary pulse train has a fixed duration, T, but takes the value 0 or 1 with equal probability (P = 1/2), independently of the previous pulse. An example of part of such a train is sketched below.

Figure 6.7: Part of a random binary pulse train taking the values 0 and 1.

1. If $x_i$ is the random variable representing the height of pulse i, calculate $E[x_i]$, $E[x_i^2]$, and $E[x_i x_j]$ for $i \ne j$.
2. Find and sketch the auto-correlation function $R_{xx}(\tau)$ of the pulse train.
3. Hence find and sketch the power spectral density $S_{xx}(\omega)$ of the pulse train.
[A]
1. The pulse heights $x_i$ are 0 and 1, each with a probability of $P_i = 1/2$:
$$E[x_i] = \sum_i x_i P_i = 0 \times \tfrac{1}{2} + 1 \times \tfrac{1}{2} = \tfrac{1}{2}$$
$$E[x_i^2] = \sum_i x_i^2 P_i = 0^2 \times \tfrac{1}{2} + 1^2 \times \tfrac{1}{2} = \tfrac{1}{2}$$

Figure 6.8: Repeated trials of the pulse train, each sampled at times t and t + τ.

Each pulse value $x_i$ is independent of any other, so for $i \ne j$
$$E[x_i x_j] = E[x_i] \times E[x_j] = \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4} .$$

2. To find the ensemble average, take the product $x(t)\,x(t+\tau)$ and average over many random trials or experiments:
$$R^E_{xx} = \int x(t; \alpha)\,x(t+\tau; \alpha)\,p(\alpha)\,d\alpha$$
$$= (0 \times 0)\,p(x(t){=}0, x(t{+}\tau){=}0) + (0 \times 1)\,p(x(t){=}0, x(t{+}\tau){=}1)$$
$$+ (1 \times 0)\,p(x(t){=}1, x(t{+}\tau){=}0) + (1 \times 1)\,p(x(t){=}1, x(t{+}\tau){=}1)$$
$$= p(x(t){=}1, x(t{+}\tau){=}1) .$$

When $\tau > T$ there is always an independent transition during the interval $\tau$, so that
$$R^E_{xx} = p(x(t){=}1, x(t{+}\tau){=}1) = p(x(t){=}1) \times p(x(t{+}\tau){=}1) = \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4} .$$

Figure 6.9: The autocorrelation $R_{xx}(\tau)$ of the pulse train: a triangle peaking at 1/2 at $\tau = 0$ and falling to a constant level of 1/4 for $|\tau| \ge T$.

When $0 \le \tau \le T$,
$$R^E_{xx} = p(x(t){=}1, x(t{+}\tau){=}1)$$
$$= p(x(t){=}1, \text{no } \downarrow \text{ transition during the following } \tau)$$
$$= p(x(t){=}1) \times p(\text{no } \downarrow \text{ transition during the following } \tau)$$
$$= p(x(t){=}1) \times \left(1 - p(\downarrow \text{ transition during the following } \tau)\right)$$
$$= \frac{1}{2}\left(1 - \frac{1}{2}\,p(\text{ANY transition during the following } \tau)\right)$$
$$= \frac{1}{2}\left(1 - \frac{1}{2}\,\frac{\tau}{T}\right) .$$
The last step multiplies the frequency of transitions, 1/T, by the time interval $\tau$ to get the probability of a transition.
Finally we use the even symmetry property to complete the function:
$$R_{xx}(\tau) = \frac{1}{4} + \frac{1}{4}\,\Lambda_T(\tau)$$
where $\Lambda_T(\tau)$ is a triangle of unit height and half-width T.
3. The FT of the triangle of half-width T (Lecture 2) is $T\,\dfrac{\sin^2(\omega T/2)}{(\omega T/2)^2}$ and the FT of unity (Lecture 3) is $FT[1] = 2\pi\,\delta(\omega)$, so that the power spectral density is
$$S_{xx}(\omega) = FT[R_{xx}] = \frac{T}{4}\,\frac{\sin^2(\omega T/2)}{(\omega T/2)^2} + \frac{\pi}{2}\,\delta(\omega) .$$

Figure 6.10: The power spectral density $S_{xx}(\omega)$ of the pulse train.
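As a check (an illustration, not from the notes), the triangular autocorrelation can be reproduced by temporal averaging over one long realization of the pulse train; the pulse duration, grid spacing and pulse count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

T = 1.0                                   # pulse duration
dt = 0.01                                 # time grid spacing
samples_per_pulse = int(round(T / dt))
n_pulses = 50_000

# One long realization: each pulse is 0 or 1 with probability 1/2.
bits = rng.integers(0, 2, n_pulses)
x = np.repeat(bits, samples_per_pulse).astype(float)

def R_est(x, k):
    """Temporal estimate of R_xx at a lag of k samples."""
    n = len(x)
    return np.dot(x[: n - k], x[k:]) / (n - k)

def R_theory(tau):
    return 0.25 + 0.25 * max(0.0, 1.0 - abs(tau) / T)

for tau in (0.0, 0.25, 0.5, 0.75, 1.0, 2.0):
    k = int(round(tau / dt))
    print(f"tau = {tau:4.2f}   estimated {R_est(x, k):.4f}   theory {R_theory(tau):.4f}")
```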

6.11 Appendix: A proof of the transfer of the Power Spectrum

Given power signals x(t) and y(t) we must derive the result $S_{yy}(\omega) = |H(\omega)|^2 S_{xx}(\omega)$ without assuming the existence of $X(\omega)$ and $Y(\omega)$.
Start by writing the autocorrelation of the output:
$$R_{yy}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} y(t)\,y(t+\tau)\,dt .$$
As the output y(t) is the convolution of the input with the impulse response function,
$$y(t) = \int_{-\infty}^{\infty} x(t-p)\,h(p)\,dp , \qquad y(t+\tau) = \int_{-\infty}^{\infty} x(t+\tau-q)\,h(q)\,dq .$$
Inserting these into $R_{yy}$ and changing the order of integration,
$$R_{yy}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \left[\int_{p=-\infty}^{\infty} x(t-p)\,h(p)\,dp \int_{q=-\infty}^{\infty} x(t+\tau-q)\,h(q)\,dq\right] dt$$
$$= \int_{p=-\infty}^{\infty} \int_{q=-\infty}^{\infty} \left[\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t-p)\,x(t+\tau-q)\,dt\right] h(q)\,h(p)\,dq\,dp .$$

Change the variable from t to $t' = t - p$:
$$R_{yy}(\tau) = \int_{p=-\infty}^{\infty} \int_{q=-\infty}^{\infty} \left[\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t')\,x(t' + \tau + p - q)\,dt'\right] h(q)\,h(p)\,dq\,dp$$
$$= \int_{p=-\infty}^{\infty} \left[\int_{q=-\infty}^{\infty} R_{xx}(\tau + p - q)\,h(q)\,dq\right] h(p)\,dp$$
$$= \int_{p=-\infty}^{\infty} \{[R_{xx} * h](\tau + p)\}\,h(p)\,dp .$$

The quantity in braces should be read as the convolution of $R_{xx}$ and h evaluated at $(\tau + p)$. Now replace p by $-p'$:
$$R_{yy}(\tau) = \int_{p'=-\infty}^{\infty} [R_{xx} * h](\tau - p')\,h(-p')\,dp' = [R_{xx} * h](\tau) * h(-\tau) .$$
Taking Fourier Transforms of both sides we find
$$S_{yy}(\omega) = FT[R_{yy}(\tau)] = FT[R_{xx} * h]\;FT[h(-\tau)] = FT[R_{xx}(\tau)]\;FT[h(\tau)]\;FT[h(-\tau)] = S_{xx}(\omega)\,H(\omega)\,H^*(\omega) = |H(\omega)|^2\,S_{xx}(\omega) .$$
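A numerical spot-check of this result (an illustration, not from the notes; it assumes SciPy is available): filter white noise with an arbitrary low-pass filter, estimate the input and output power spectral densities with Welch periodograms, and compare their ratio with $|H(\omega)|^2$.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)

fs = 1000.0
x = rng.normal(0.0, 1.0, 200_000)            # white-noise input
b, a = signal.butter(2, 100.0, fs=fs)        # arbitrary 2nd-order low-pass (an assumption)
y = signal.lfilter(b, a, x)                  # system output

f, Sxx = signal.welch(x, fs=fs, nperseg=4096)
_, Syy = signal.welch(y, fs=fs, nperseg=4096)
_, H = signal.freqz(b, a, worN=f, fs=fs)

# S_yy should equal |H|^2 * S_xx up to estimation noise.
ratio = Syy[1:200] / (np.abs(H[1:200]) ** 2 * Sxx[1:200])
print("median of S_yy / (|H|^2 S_xx):", round(float(np.median(ratio)), 3))   # expect ~1
```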
