Autocorrelation of Random Processes

Version 2.4: 2002/07/09 00:00:00.003 GMT-5

Michael Haag

This work is produced by The Connexions Project and licensed under the Creative Commons Attribution License [∗].
Abstract
This module explains the autocorrelation function and its properties. Examples are also provided to help you step through some of the more complicated statistical analysis.
Let us quickly review the idea of correlation [2]. Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our focus will be to discover more about a random process, a collection of random signals, imagine us dealing with two samples of a random process, where each sample is taken at a different point in time. Also recall that the key property of these random processes is that they are now functions of time; imagine them as a collection of signals [3]. The expected value of the product of these two variables (or samples) will now depend on how quickly they change with respect to time. For example, if the two variables are taken from almost the same time period, then we should expect them to have a high correlation. We will now look at a correlation function that relates a pair of random variables from the same process to the time separation between them, where the argument to this correlation function will be the time difference. For the correlation of signals from two different random processes, look at the crosscorrelation function [4].
1 Autocorrelation Function
The first of these correlation functions we will discuss is the autocorrelation, where each of the random variables we will deal with comes from the same random process.

Definition 1: Autocorrelation
the expected value of the product of a random variable or signal realization with a time-shifted version of itself
[∗] http://creativecommons.org/licenses/by/1.0
[1] http://cnx.rice.edu/content/m10649/latest/
[2] http://cnx.rice.edu/content/m10673/latest/
[3] http://cnx.rice.edu/content/m10656/latest/
[4] http://cnx.rice.edu/content/m10686/latest/
With a simple calculation and analysis of the autocorrelation function, we can discover a few important characteristics about our random process. These include:

1. How quickly our random signal or process changes with respect to time
2. Whether our process has a periodic component and what the expected frequency might be
As was mentioned above, the autocorrelation function is simply the expected value of a product. Assume we have a pair of random variables from the same process, X_1 = X(t_1) and X_2 = X(t_2); then the autocorrelation is often written as

R_{xx}(t_1, t_2) = E[X_1 X_2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 f(x_1, x_2) \, dx_2 \, dx_1     (1)

where f(x_1, x_2) is the joint probability density function of X_1 and X_2.
The above equation is valid for both stationary and nonstationary random processes. For stationary processes, we can generalize this expression a little further. Given a wide-sense stationary process, it can be proven that the expected values from our random process will be independent of the origin of our time function. Therefore, we can say that our autocorrelation function will depend on the time difference and not on some absolute time. For this discussion, we will let τ = t_2 − t_1, and thus we generalize our autocorrelation expression as
R_{xx}(t, t + τ) = R_{xx}(τ) = E[X(t) X(t + τ)]     (2)

for the continuous-time case.
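As a quick worked illustration of this wide-sense stationary form (our own example, not part of the original module), take a sinusoid with a random phase, X(t) = A cos(ωt + Θ), where Θ is uniform on [0, 2π). Using the identity cos(a)cos(b) = (1/2)[cos(a − b) + cos(a + b)],

\begin{align*}
R_{xx}(\tau) &= E[A\cos(\omega t + \Theta)\, A\cos(\omega(t + \tau) + \Theta)] \\
             &= \frac{A^2}{2} E[\cos(\omega\tau) + \cos(2\omega t + \omega\tau + 2\Theta)] \\
             &= \frac{A^2}{2} \cos(\omega\tau)
\end{align*}

The second cosine averages to zero over the uniform phase, so the result depends only on the time difference τ, exactly as Equation (2) requires. Notice also that this R_{xx}(τ) is periodic in τ and largest at τ = 0, two properties we will state in general below.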
In most DSP courses we will be more interested in dealing with real signal sequences, and thus we will want to look at the discrete-time case of the autocorrelation function. The formula below will prove to be more common and useful than Equation (1):
R_{xx}[n, n + m] = \sum_{n = -\infty}^{\infty} x[n] \, x[n + m]     (3)
And again we can generalize the notation for our autocorrelation function as
R_{xx}[n, n + m] = R_{xx}[m] = E[X[n] \, X[n + m]]     (4)
Below are some properties of the autocorrelation function of a wide-sense stationary process (the sketch after this list checks both numerically):

• The autocorrelation function will have its largest value when τ = 0. This value can appear again, for example in a periodic function at the values of the equivalent periodic points, but will never be exceeded.

R_{xx}(0) \geq |R_{xx}(τ)|     (5)

• If we take the autocorrelation of a periodic function, then R_{xx}(τ) will also be periodic with the same frequency.
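The following minimal sketch (our own, not part of the original module; it assumes NumPy and a 5 Hz sinusoid sampled at 1 kHz) checks both properties numerically:

import numpy as np

fs = 1000                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
x = np.cos(2 * np.pi * 5 * t)      # periodic signal, period fs/5 = 200 samples

# Full (unnormalized) autocorrelation sequence; lag 0 sits at index len(x) - 1.
rxx = np.correlate(x, x, mode="full")
r0 = rxx[len(x) - 1]

# Property 1: the lag-0 value is never exceeded at any other lag.
assert np.all(np.abs(rxx) <= r0 + 1e-9)

# Property 2: peaks recur at multiples of the 200-sample period (their height
# decays with lag only because the finite sum contains fewer and fewer terms).
print(r0, rxx[len(x) - 1 + 200], rxx[len(x) - 1 + 400])   # about 500, 400, 300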
However, often we will not have sufficient information to build a complete continuous-time function of one of our random signals for the above analysis. If this is the case, we can treat the information we do know about the function as a discrete signal and use the following discrete-time formula for estimating the autocorrelation:
\hat{R}_{xx}[m] = \frac{1}{N - m} \sum_{n = 0}^{N - m - 1} x[n] \, x[n + m]     (6)
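A direct translation of this estimator into code might look like the following sketch (our own, assuming NumPy; the function name is illustrative):

import numpy as np

def autocorr_estimate(x, max_lag):
    # Time-average estimate of Rxx[m] for m = 0..max_lag (requires max_lag < len(x)),
    # a direct translation of Equation (6).
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.empty(max_lag + 1)
    for m in range(max_lag + 1):
        # Only N - m products x[n] * x[n + m] are available at lag m.
        r[m] = np.dot(x[:N - m], x[m:]) / (N - m)
    return r

Dividing by N − m makes the estimate unbiased at each lag; some texts divide by N instead, trading a small bias for lower variance at large lags.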
2 Examples
Below we will look at a variety of examples that use the autocorrelation function. We will
begin with a simple example dealing with Gaussian White Noise (GWN) and a few basic
statistical properties that will prove very useful in these and future calculations.
Example 1:
We will let x [n] represent our GWN. For this problem, it is important to remember
the following fact about the mean of a GWN function:
E[x[n]] = 0
Along with being zero-mean, recall that samples of GWN taken at different times are independent. With these two facts, we are now ready to do the short calculations required to find the autocorrelation.
R_{xx}[n, n + m] = E[x[n] \, x[n + m]]
Since distinct samples of the function x[n] are independent, we can take the product of the individual expected values of both factors:

R_{xx}[n, n + m] = E[x[n]] \, E[x[n + m]]

(Figure 1: Gaussian density function)

Now, looking at the above equation, we see that we can break it up further into two conditions: one when the two samples coincide (m = 0) and one when they do not. When they coincide, we can combine the expected values into the expected value of the square. We are left with the following piecewise function to solve:
R_{xx}[n, n + m] = \begin{cases} E[x[n]] \, E[x[n + m]] & \text{if } m \neq 0 \\ E[x^2[n]] & \text{if } m = 0 \end{cases}
We can now solve the two parts of the above equation. The first part is easy to solve, as we have already stated that the expected value of x[n] will be zero. For the second part, you should recall from statistics that the expected value of the square of a zero-mean function is equal to its variance. Thus we get the following results for the autocorrelation:
R_{xx}[n, n + m] = \begin{cases} 0 & \text{if } m \neq 0 \\ \sigma^2 & \text{if } m = 0 \end{cases}
Or in a more concise way, we can represent the results as
R_{xx}[n, n + m] = \sigma^2 \delta[m]
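We can sanity-check this result numerically with a short sketch (our own, assuming NumPy and an illustrative variance of σ² = 2):

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.0                                   # assumed variance for this check
x = rng.normal(0.0, np.sqrt(sigma2), 100_000)  # simulated GWN, zero mean

N = len(x)
for m in range(4):
    # Time-average estimate of Rxx[m], as in Equation (6).
    r_m = np.dot(x[:N - m], x[m:]) / (N - m)
    print(m, round(r_m, 3))
# Expect a value near sigma2 = 2.0 at m = 0 and values near 0.0 elsewhere,
# matching Rxx[n, n + m] = sigma^2 * delta[m].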