Spectral Analysis Applied To Stochastic Processes: 2.0 Theorem of Wiener-Khintchine
will not be finite. Hence, the Fourier integral does not exist.
Remark: a large set of deterministic signals also fails to meet the
condition of integrability, e.g. the time-harmonic signals sin(t) and cos(t). To
allow spectral analysis, distributions have been introduced (the Dirac
δ); i.e.:
SIGNAL THEORY - version 2012
so that:
and, hence:
(in the next paragraph one will see that this is the periodogram).
Obviously, this equation contains no information on the phase of
the signal. The power PT(k) can then be obtained by integrating the power
density function over the entire frequency domain:
When the duration T of the truncation interval increases and, in the limit,
tends to infinity, one assumes that the limit exists, i.e. remains finite:
so that:
with:
Substituting t1 = t2 + τ yields dt1 = dt2, so that the limits of the integral
become:
and hence:
Since the autocorrelation function is even, B(τ) = B(-τ), one can rewrite
the relations in the form originally published by Wiener and Khintchine:
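The discrete analogue of these relations is easy to verify numerically: the DFT of the circular autocorrelation of a sampled signal equals its periodogram. A minimal Python sketch (numpy assumed; the test signal and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
# A stationary test signal: white noise smoothed by a short moving average.
x = np.convolve(rng.standard_normal(N), np.ones(8) / 8, mode="same")

# Periodogram: |X(k)|^2 / N.
periodogram = np.abs(np.fft.fft(x)) ** 2 / N

# Circular autocorrelation B(m) = (1/N) * sum_n x[n] x[n+m].
B = np.array([x @ np.roll(x, -m) for m in range(N)]) / N

# Wiener-Khintchine (discrete form): the DFT of B equals the periodogram.
assert np.allclose(np.fft.fft(B).real, periodogram, atol=1e-8)
```

The agreement holds to round-off, since both sides are exact rearrangements of the same double sum.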
2.1 PERIODOGRAM
It has been demonstrated that when the limit is taken for T → ∞, the PSD
becomes:
This spectrum depends on the position of the time window of width T
with respect to the time origin. If the observed signal is stationary and
ergodic, an approximation of q(ω) can be estimated from the
average of different, successive periodograms of width T. Hence, it
becomes possible to gain spectral information using the classical Fourier
integral, which for discrete signals is implemented with an FFT (Fast
Fourier Transform).
This technique, which forms the basis of Fourier-analyser
measurement instruments, is depicted in Figure 2.1. Using the FFT, for each
interval of width T the function XTj(k)(ω) is obtained, and hence also the
densities qTj(k)(ω) can be derived. In the example the intervals do not overlap;
in practice, however, 50% overlap is often used. One then obtains:
Since the signal is ergodic (from one single realization, the full set of
statistical properties can be derived), one can state that:
If B(τ) approaches its asymptotic value quickly (see paragraph
2.2 further on), it becomes possible and attractive to obtain q(ω) via the
Fourier integral, since then only a limited number of lags of B(τ) is required.
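The averaging of successive periodograms described above, including the optional 50% overlap, can be sketched as follows; this is a Welch-type estimate, and the function name and segment parameters are illustrative:

```python
import numpy as np

def averaged_periodogram(x, T, overlap=0.5):
    """Average the periodograms of segments of width T (50% overlap by default)."""
    step = max(1, int(T * (1 - overlap)))
    segs = [x[i:i + T] for i in range(0, len(x) - T + 1, step)]
    # One periodogram qTj per segment j, then the average over the segments.
    return np.mean([np.abs(np.fft.fft(s)) ** 2 / T for s in segs], axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal(32768)            # unit-variance white noise
q = averaged_periodogram(x, T=256)

# For unit-variance white noise the averaged PSD estimate is flat at about 1.
assert abs(q.mean() - 1.0) < 0.05
```

Averaging over many segments reduces the variance of the estimate at the cost of frequency resolution, exactly as the text describes for the stationary, ergodic case.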
As τ tends to infinity, R(τ) will asymptotically approach a². This happens
either in a monotonic or in an oscillatory way, as depicted
in Figure 2.3.
2. Since:
One can also derive this result for a wide-sense stationary ergodic signal
directly from the Wiener-Khintchine relations. Indeed, since:
and, hence:
3. From equation (‡) one derives that R(τ) = R(-τ), and hence that
the autocorrelation is an even function.
Remark: To prove this property, one derives the second-order initial
statistical moment; i.e.:
In both cases the attained value persists from the given time instant
over a further T/2. The signal is depicted in Figure 2.4.
iii) the signal has no D.C. (direct current) component; in other words, the
ensemble average equals zero, since the realization average and ensemble
average are equal.
Hence, the time-zero reference t0 can be chosen arbitrarily, e.g. t0 = 0.
This will be assumed further on.
The autocorrelation, therefore, can be evaluated using:
To compute this function, the original signal is shifted over a time
delay τ, the product of the two versions is formed and then averaged. This
is illustrated in Figure 2.5. One can immediately conclude from this that the
autocorrelation function is periodic with the same period T.
Figure 2.5: The original square-wave signal ξ(t) and the version shifted
over a time delay τ.
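The shift-multiply-average recipe can be carried out numerically for the square wave; the sketch below (the amplitude a and the period T in samples are illustrative choices) confirms that R(τ) is periodic with the same period T:

```python
import numpy as np

# Square wave of amplitude a and period T (here T = 200 samples).
a, T, n_periods = 1.0, 200, 50
t = np.arange(T * n_periods)
xi = a * np.sign(np.sin(2 * np.pi * (t + 0.5) / T))  # offset avoids exact zeros

# R(tau): shift the signal over tau, multiply, and time-average.
R = np.array([np.mean(xi * np.roll(xi, tau)) for tau in range(2 * T)])

assert np.isclose(R[0], a**2)         # R(0) = a^2, the power of the signal
assert np.isclose(R[T], R[0])         # periodic with the same period T
assert np.isclose(R[T // 2], -a**2)   # half a period later: full anticorrelation
```

Time-averaging over one realization is justified here because the (randomly phased) square wave is ergodic.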
and hence:
with:
Let
One also observes that the signal is ergodic, so that:
Since:
And hence:
is given by λΔt. Hence, the probability of no change is simply [1 - λΔt]. If ξ(t0) is given, then all the
previously attained values of ξ(t) have no influence on the future value. The future value depends only on the
current value at the time instant t = t0. Hence, the telegraph signal is a Markov process!
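A realization of the telegraph signal can be simulated directly from this description (a flip with probability λΔt per step, independent of the past); the empirical autocorrelation then follows the standard result a²·exp(-2λτ) for this process. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
a, lam, dt, N = 1.0, 1.0, 1e-3, 4_000_000
# In each step dt the polarity flips with probability lam*dt,
# and is held with probability 1 - lam*dt (the Markov property).
flips = rng.random(N) < lam * dt
xi = a * np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

# Empirical autocorrelation at a few lags versus a^2 * exp(-2*lam*tau).
for tau in (0.25, 0.5, 1.0):
    k = int(tau / dt)
    R_hat = np.mean(xi[:-k] * xi[k:])
    assert abs(R_hat - a**2 * np.exp(-2 * lam * tau)) < 0.08
```

Because the future depends only on the current polarity, the simulation never needs to store more than the running flip count.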
Remarks:
i) Note that the signal is wide-sense stationary;
ii) hence, one concludes that the signal is ergodic, since the
parameter used is the ensemble average.
Conclusion:
Suppose now that for each t1, t2 the events {ξ(t1)=a, ξ(t2)=a} and
{ξ(t1)=-a, ξ(t2)=-a} have the same probability. This means that for a certain
interval τ = t1 - t2, wherever placed with respect to the signal, the probability of
no polarity change, or 2 polarity changes, or, in general, an even number
of polarity changes, is independent of the attained value (the same
probability is assumed both for +a and for -a).
The same condition is assumed for the events {ξ(t1)=a, ξ(t2)=-a} and
{ξ(t1)=-a, ξ(t2)=a}. This means that for a certain interval τ = t1 - t2, wherever
placed with respect to the signal, the probability of one single polarity
change, or 3 polarity changes, or, in general, an odd number of polarity
changes, is independent of the attained value (again the same probability
for +a and for -a).
One has also noted that P(ξ(t1)=+a) = P(ξ(t1)=-a) = 0.5, so that the signal is
statistically symmetric with respect to the time axis. Hence P(a,a) = P(-a,-a)
and P(a,-a) = P(-a,a).
The autocorrelation function, therefore, results in:
The power of the signal is given by P = R(0) = a². This result can also be
retrieved from the definition and the pdf:
One also finds the asymptotic value for large time intervals:
Figure 2.10: The PSD of the telegraph signal, where a = 1 and λ = 1.
Binary signals are generated with a bit rate of 1/T bit/s and will take the
voltage values x1 and x2. These voltages are used to encode the characters of a
source with alphabet length equal to 2. This corresponds to a binary source
with symbols {0,1}. At each clock period, a polarity change occurs if the new
symbol differs from the previous one; otherwise the level is held.
At the receiving side, a clock with the same period is used to sample the
signal; i.e. to decode the voltages x1,2 into the symbols of the binary source
{0,1}. Both clocks (transmitter-receiver) have the same frequency, but are not
synchronized; i.e. their zero-crossings do not occur at the same time. Even
if both clocks were synchronized, the incoming time signal would be
shifted relative to the receiver clock due to the delay in the transmission
channel. Hence the sampling moment will occur somewhere between 0
(perfectly synchronized and no delay) and T. The shift cannot be greater than
T, since this would imply the reception of another symbol from the source. In
Chapter 1, an example was treated to demonstrate how the phase shift
should be distributed in order to ensure that the signal is wide-sense
stationary: the shift should be uniformly distributed between
0 and T to make the NRZ signal wide-sense stationary. This condition is
necessary to be able to apply the Wiener-Khintchine theorem further to
compute the PSD. Again, with the PSD, the bandwidth of the NRZ-signal can
be determined.
The symbols that the binary source is producing are assumed to be
statistically independent. Also here the polarity switching in the line coding is
assumed to be executed extremely fast, so that mathematically it can be
modelled as happening in an infinitely small amount of time, i.e. in 0 s. Then the
pdf can be expressed as:
The signal has only two possible values: either x1 or x2.
This implies further that:
In the case that τ > T, the values ξ(t) and ξ(t+τ) are independent of each
other, since the source is assumed to produce symbols which are
independent of each other. Since the signal is stationary, the autocorrelation
in this case can be written as:
In the case that τ ≤ T one has two mutually exclusive cases: either a
polarity switch is observed, i.e. ξ(t) ≠ ξ(t+τ), or there is no change in polarity, i.e.
ξ(t) = ξ(t+τ). The autocorrelation for the two cases, therefore, results in:
The event ξ(t) ≠ ξ(t+τ) occurs when a polarity change takes place
within the interval of duration τ, i.e. when the clock transition falls inside that interval. One then obtains:
With this, also the second probability can be found easily, since:
And hence:
This can be written even more compactly when the function tri(·) is
introduced:
frequency range from D.C. up to the first zero (major bin in Figure 2.14),
which corresponds to 1/T Hz; i.e. in the range:
Hence, as a rule of thumb, one can state that the bandwidth of an NRZ
line code equals the clock rate: a 100 Mbit/s
transmission will require about 100 MHz of bandwidth.
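Under the usual additional assumption of antipodal, equiprobable, independent levels, the NRZ PSD takes the form S(f) = T·sinc²(fT), whose first null lies at f = 1/T; the sketch below checks the null position and shows that the band up to 1/T indeed carries roughly 90% of the power (T and the frequency grid are illustrative):

```python
import numpy as np

T = 1.0                                  # bit period (illustrative value)
f = np.linspace(1e-6, 10 / T, 200_000)   # uniform frequency grid
S = T * np.sinc(f * T) ** 2              # np.sinc(x) = sin(pi*x)/(pi*x)

# The first spectral null sits at f = 1/T.
mask = f < 1.5 / T
f_null = f[mask][np.argmin(S[mask])]
assert abs(f_null - 1.0 / T) < 1e-2

# The band [0, 1/T] holds about 90% of the (represented) power.
frac = S[f <= 1 / T].sum() / S.sum()
assert 0.88 < frac < 0.94
```

Scaling T down by a factor of 100 moves the null to 100/T, which is the quantitative content of the 100 Mbit/s ↔ 100 MHz rule of thumb.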
Where:
If the signal is ergodic, then σ represents the r.m.s. (root mean square)
value of the signal:
If ρ(τ) = 0 for τ ≠ 0 and ρ(0) = 1, then the noise source is called
white².
For a Gaussian white noise source the second-order density can then
be expressed as:
² In other words ρ(τ) = δ(τ), and hence B(τ) = σ²δ(τ). It can be shown that the condition for a Gaussian
noise source to be ergodic is given by:
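The defining property ρ(τ) = δ(τ) can be illustrated with sampled Gaussian white noise: the lag-0 autocorrelation returns σ² (so σ is the r.m.s. value), while every nonzero lag averages to approximately zero. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
n = rng.normal(0.0, sigma, 1_000_000)   # Gaussian white noise samples

# Lag 0 gives the variance sigma^2; nonzero lags average to (nearly) zero.
B0 = np.mean(n * n)
B_lags = [np.mean(n[:-k] * n[k:]) for k in (1, 5, 50)]

assert abs(B0 - sigma**2) < 0.05
assert all(abs(b) < 0.05 for b in B_lags)
```

The residual values at nonzero lags shrink as 1/√N with the number of samples, consistent with the ergodicity of the source.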
(①)
(②)
Indeed, multiplying both sides of equation (①) by φn*(t)
and integrating yields equation (②) directly.
The Fourier series is a special example.
Suppose further that the ensemble average, over the stochastic space, of
the coefficients bn equals zero:
³ The base functions in the Fourier series are orthonormal; i.e.:
One will now derive the base functions φn(t) that ensure orthogonality of
the coefficients bn.
Only when the ensemble average of the coefficients bn is zero do
orthogonality and uncorrelatedness refer to the same property, because only
then do the inner product in the orthogonality condition and the covariance
operator refer to the same equation.
Suppose that the autocorrelation function of x(t) is given by B(t1,t2), and
that B(t1,t2) is continuous. Suppose further that the ensemble average of x(t) is
zero, so that the ensemble average of the coefficients bn also becomes zero.
One can now easily prove that orthogonality of the coefficients
bn implies their uncorrelatedness.
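The claim can be checked in a discrete analogue: take the φn as the eigenvectors of a sampled covariance matrix B(t1, t2); the empirical covariance of the coefficients bn then comes out (nearly) diagonal, with the eigenvalues λn on the diagonal. A sketch (the covariance model, sizes, and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 64, 100_000                       # samples per signal, realizations

# Zero-mean Gaussian process with covariance B(t1,t2) = exp(-|t1-t2|/10).
t = np.arange(N)
B = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)
L = np.linalg.cholesky(B + 1e-10 * np.eye(N))
x = L @ rng.standard_normal((N, M))      # each column is one realization

# Discrete Karhunen-Loeve: phi_n = eigenvectors of B, b_n = <x, phi_n>.
lam, phi = np.linalg.eigh(B)
b = phi.T @ x

C = (b @ b.T) / M                        # empirical covariance of the b_n
corr = C / np.sqrt(np.outer(lam, lam))   # normalize to correlations
off = corr - np.diag(np.diag(corr))
assert np.max(np.abs(off)) < 0.05                          # b_n uncorrelated
assert np.allclose(np.diag(C), lam, rtol=0.05, atol=0.02)  # Var(b_n) = lam_n
```

Since the process is zero-mean, the vanishing off-diagonal covariances are exactly the uncorrelatedness asserted above.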
Remarks:
i) bn does not depend on t, but it does depend on the realization k: x(t) = xk(t).
ii) the condition that the ensemble average of bn equals zero does not
imply that the signal x(t) is ergodic.
To ensure that the coefficients bn, given by (②), are orthogonal, the base
functions φn(t) must satisfy the integral equation:
for a certain value λ = λn, and the variance of bn should equal λn.
Proof:
Since:
So that:
(③)
This is because the φm(t) are fixed functions independent of the
realization, so that only the coefficients bn depend on the realization (k).
Hence:
On the other hand, taking the complex conjugate of equation (②) and
substituting yields:
advantage of the expansion into this type of series, however, is the optimal
convergence. The expansion is important for compression of coded signals.
The base functions are, indeed, independent of the realization and can
therefore be made available to the receiver beforehand. Instead of sending the
signal through the channel as such, one only has to transmit the coefficients,
so that at the receiving side the signal can be reconstructed with the series
expansion after some computation. In case the convergence is optimal, only a
small number of coefficients needs to be transmitted to establish a truncated
version of the series, whose error will be small thanks to the fast
convergence.
Suppose the process x(t) is wide-sense stationary and ideal baseband low-pass
(i.e. an adequate model for white noise filtered by an ideal rectangular
filter). Then the PSD of x(t), S(ω) = S0 for |ω| ≤ ωc and S(ω) = 0 elsewhere, is as
depicted in Figure 2.15.
Figure 2.15: The PSD and the autocorrelation function of the ideal
baseband low-pass signal.
The autocorrelation function can easily be computed from the PSD using
the theorem of Wiener-Khintchine:
or:
This integral equation has been solved and has solutions φn(t,c), where
c = ωcT. These solutions are known as the prolate spheroidal wave functions.
One can observe that the eigenvalues λn(c) and the eigenfunctions
φn(t,c) of the integral equation depend only on c.
Example: c = 4: λn(4) = 0.996; 0.912; 0.519; 0.110 for n = 0, 1, 2, 3.
The eigenfunctions φn(t,c) are depicted in Figure 2.16.
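These tabulated eigenvalues can be reproduced by discretizing the integral equation with the sinc kernel (a Nyström-type sketch on the normalized interval [-1, 1]; the grid size and this particular normalization of the kernel are assumptions of the illustration):

```python
import numpy as np

# Nystrom discretization of the sinc-kernel integral equation on [-1, 1]:
#   integral_{-1}^{1} sin(c(t-s)) / (pi (t-s)) phi(s) ds = lambda phi(t)
c, N = 4.0, 800
t = np.linspace(-1.0, 1.0, N)
h = t[1] - t[0]
d = t[:, None] - t[None, :]
K = (c / np.pi) * np.sinc(c * d / np.pi)   # sin(c d)/(pi d), finite at d = 0

w = np.full(N, h); w[0] = w[-1] = h / 2    # trapezoid quadrature weights
A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]   # symmetrized operator
lam = np.sort(np.linalg.eigvalsh(A))[::-1]

# The largest eigenvalues for c = 4 match the tabulated prolate values.
for est, ref in zip(lam[:4], (0.996, 0.912, 0.519, 0.110)):
    assert abs(est - ref) < 0.01
```

Note how only about 2c/π eigenvalues are close to 1 before the sequence plunges toward zero, which is exactly why the expansion converges so quickly.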
One will now attempt to derive the orthogonal series expansion for ξ(t)
in the interval [-A, +A]. To this end the integral equation must be solved:
Solving this integral equation leads to the set of base functions
that allows one to derive the Karhunen-Loève expansion for a
stochastic signal with zero ensemble mean. This part is intended purely
for the reader's information and is not part of the examination material.
One will try to solve the integral equation by formulating a linear
differential equation which f(t) must obey and finding general solutions of the
latter. Next, these solutions are entered into the integral equation to find
the value of λ. The integral equation can first be rewritten as:
or:
and hence:
or:
In other words: for the integral equation to hold for f(t),
the latter must also satisfy the linear second-order differential equation.
Therefore, one solves the differential equation and feeds a general
solution into the integral equation, taking into account all possible values of λ;
i.e.⁴:
Assume first the last possibility; i.e. > 2. Then, one can observe that:
⁴ α ≥ 0, because of the definition α = 2λ, where λ represents the average number of polarity changes
per second, and hence α is positive, and since:
Substituting this general solution into the integral equation, one obtains,
after integration and grouping of terms, the following expression:
and:
Substituting jb by a, one returns to the previous case. Here too, one
notes that the equations cannot be satisfied when c1 ≠ ±c2. Hence,
two possibilities remain: c1 = c2 and c1 = -c2. In the case c1 = c2, one
finds that:
One now denotes the nonzero solutions of this equation by
bn (an infinite number of solutions bn will be found). Substituting bn
into the definition of b then yields λn, for which the right-hand side of the
equation becomes f(t), so that the integral equation is satisfied. In the case c1 = -c2, one can
demonstrate in a similar manner that the integral equation can be satisfied if
b agrees with:
where b satisfies:
One can readily demonstrate that the two other cases do not yield
solutions either. In the first case f ′(t) is constant, and in the second f ′(t) = 0, so that:
Where:
And:
and where: