Random Signals
Signals can be divided into two main categories: deterministic and random. The term random signal primarily denotes signals whose source is random in nature. An example is thermal noise, which is created by the random movement of electrons in an electric conductor. The term random signal is, however, also used for signals from other categories, such as periodic signals in which one or several parameters exhibit random behavior. An example is a periodic sinusoidal signal with a random phase or amplitude. Signals can be treated either as deterministic or random, depending on the application. Speech, for example, can be considered a deterministic signal if one specific speech waveform is studied. It can also be viewed as a random process if one considers the ensemble of all possible speech waveforms in order to design a system that will optimally process speech signals in general.
The behavior of stochastic signals can be described only on average. The description of such signals is as a rule based on terms and concepts borrowed from probability theory. Signals are, however, functions of time, and such a description quickly becomes unwieldy and impractical. Only a fraction of all signals, known as ergodic signals, can be handled in a relatively simple way. Among the signals that are excluded is the class of non-stationary signals, which otherwise plays an essential part in practice.
Working in the frequency domain is a powerful technique in signal processing. While the spectrum is directly related to a deterministic signal, the spectrum of a random signal is defined through its correlation function. This is a natural consequence of the uncertainty that is characteristic of random signals.
Random signals can be both analog and digital. The values of digital signals are represented with a finite number of digits, which implies that the stochastic terms used differ between the two signal categories. In the following, however, we will assume that the values of digital signals are represented with infinite precision, so that the two types of signals can be described in a similar way. Such signals are often called “discrete-time signals” rather than digital signals to emphasize that the signal values are represented with infinite precision.
Most of the signals that will be processed are real. This does not mean that stochastic signals cannot be complex: complex random signals can be analyzed in the same way as real random signals with very few changes.
8.1.1 Definitions
In this section we will briefly review some of the key concepts and terms necessary
to build the theory around stochastic signal processing.
In the study of probability, any process of observation is referred to as an experiment, and the results of an observation are called the outcomes of the experiment. If the outcomes of an experiment cannot be predicted, it is called a random experiment.
The set of possible outcomes of a random experiment is called the sample space.
An element in the sample space is called a sample point. Each outcome of a random
experiment corresponds to a sample point.
Subsets of the sample space are called events, and events consisting of a single
element (sample point) are called elementary events.
Probability is a number associated with events according to some appropriate probability law. The probability assigned to an event A from the sample space S, A ∈ S, is denoted P(A) and has a value between 0 and 1:

$0 \le P(A) \le 1$

For a random variable X, the probability distribution function is defined as $W_X(\xi) = P\{X \le \xi\}$, and it is likewise bounded:

$0 \le W_X(\xi) \le 1$   (8.1.2)
[Figure 8.1. Two examples of probability distribution functions W_X(ξ).]
$w_X(\xi) = \frac{d}{d\xi} W_X(\xi), \qquad W_X(\xi) = \int_{-\infty}^{\xi} w_X(\theta)\,d\theta$   (8.1.5)
For the case in which ξ is defined on a countable subset of the real numbers, w_X(ξ) consists exclusively of δ-functions, because W_X(ξ) has a stair-case shape. If ξ is defined over the set of real numbers, w_X(ξ) is continuous, except at those points where W_X(ξ) has a discontinuity. Figure 8.2 shows two examples of probability density functions that correspond to the probability distribution functions shown in Fig. 8.1. For the case in which ξ ∈ R we have the following relations:
$P\{\xi_1 < X \le \xi_1 + d\xi_1\} = w_X(\xi_1)\,d\xi_1, \qquad \int_{-\infty}^{\infty} w_X(\xi)\,d\xi = 1$   (8.1.6)
[Figure 8.2. Two examples of probability density functions w_X(ξ), corresponding to the distribution functions in Fig. 8.1.]
E{X} is also referred to as the first-order moment of X. The nth moment of X is defined as:

$E\{X^n\} = \int_{-\infty}^{\infty} \xi^n w_X(\xi)\,d\xi$   (8.1.9)
The nth central moment is given by:

$E\{(X - E\{X\})^n\} = \int_{-\infty}^{\infty} (\xi - E\{X\})^n w_X(\xi)\,d\xi$   (8.1.10)
[Figures: examples of probability density functions, including χ² densities with s = 1 for N = 1 (one panel), and for N = 2 and N = 6 (another panel).]
$Y = \beta(X)$,   (8.1.26)

where η = β(ξ) is the desired change of variable. The probability density function w_Y(η) can be calculated through an appropriate summation of probabilities. If w_X(ξ) consists exclusively of δ-functions, then the placement and strength of these δ-functions can be found easily through the mapping η = β(ξ).
If w_X(ξ) does not contain δ-functions, and the function η = β(ξ) is differentiable, then w_Y(η) can be found from the expression:

$P\{Y \in [\eta, \eta + d\eta]\} = \sum_q P\{X \in I_q\}$,   (8.1.27)

where I_q denotes those intervals in which X can be found, given that Y is in the desired interval of values (when the mapping is not one-to-one, several values of ξ may map to a given η). The magnitude of the probability P{X ∈ I_q} for one of these intervals depends on the derivative of β(ξ):
$\beta'(\xi) = \frac{d\beta(\xi)}{d\xi}$   (8.1.29)
If β′(ξ) is zero only at a finite number of values of ξ, then the inverse function ξ = γ(η) of η = β(ξ) exists in the intervals between these values, and we get:

$w_Y(\eta) = \sum_q \frac{w_X(\xi)}{|\beta'(\xi)|} = \sum_q \frac{w_X(\gamma(\eta))}{|\beta'(\gamma(\eta))|}$   (8.1.30)
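As a quick numerical sanity check of (8.1.30), the sketch below compares a histogram of Y = β(X) with the density predicted by the formula. It assumes a standard normal X and the hypothetical monotonic mapping η = β(ξ) = ξ³, chosen purely for illustration.

# Minimal sketch checking Eq. (8.1.30); X standard normal is an assumption,
# and beta(xi) = xi^3 is a hypothetical mapping chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**3                                          # Y = beta(X)

# Empirical density of Y from a histogram
hist, edges = np.histogram(y, bins=200, range=(-8.0, 8.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Predicted density: w_Y(eta) = w_X(gamma(eta)) / |beta'(gamma(eta))|,
# with gamma(eta) = eta^(1/3) and beta'(xi) = 3 xi^2
gamma = np.sign(centers) * np.abs(centers) ** (1.0 / 3.0)
w_pred = np.exp(-gamma**2 / 2) / np.sqrt(2 * np.pi) / (3 * gamma**2)

mask = np.abs(centers) > 0.5                      # avoid the singularity at eta = 0
print(np.max(np.abs(hist[mask] - w_pred[mask])))  # small, e.g. ~1e-3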
Here, g(t) is the input signal to the amplifier, y(t) is the corresponding output
signal, and α is a positive real constant. The transfer characteristic is shown below:
[Figure: transfer characteristic of the amplifier. The output y(t) is zero for |g(t)| ≤ α and grows linearly with slope k for |g(t)| > α.]
A noise source x(t) is connected to the input of the amplifier. The probability density function of x(t) is defined as:

$w_x(\xi) = \begin{cases} \beta & \text{for } |\xi| \le b \\ 0 & \text{otherwise} \end{cases}$   (8.1.33)

Find β and the probability density function of the output signal y(t), expressed via b, α, and k, when b ≤ α and b > α. Calculate the power of the output signal for both cases.
The probability density functions:
Since $\int_{-b}^{b} w_x(\xi)\,d\xi = 1$, we get $\beta = \frac{1}{2b}$.
For the output signal when b > α we get:
[Figure: construction of w_y(η) for b > α. The dead zone |ξ| ≤ α maps to η = 0, giving a δ-function of area α/b, while the intervals α < |ξ| ≤ b map linearly onto 0 < |η| ≤ k(b − α), giving a rectangular density of height 1/(2bk).]
w_y(η) consists of a delta function and a rectangular density. The amplitude of the delta function is:

$2\alpha \cdot \frac{1}{2b} = \frac{\alpha}{b}$

The height of the rectangular density function is found from:

$\beta(\xi) = k(\xi - \alpha), \qquad \beta'(\xi) = k$

$w_y(\eta) = \sum_q \frac{w_x(\xi)}{|\beta'(\xi)|} = \frac{1/(2b)}{|k|} = \frac{1}{2bk}$
When b < α the output is always 0, and the probability density function is just a delta function with amplitude 1; the power of the noise is 0. When b > α the power is:

$P_y = \int_{-k(b-\alpha)}^{+k(b-\alpha)} \eta^2 \left( \frac{1}{2bk} + \frac{\alpha}{b}\,\delta(\eta) \right) d\eta = \frac{1}{2bk} \cdot \frac{1}{3}\left( (k(b-\alpha))^3 + (k(b-\alpha))^3 \right) = \frac{k^3(b-\alpha)^3}{3bk} = \frac{k^2(b-\alpha)^3}{3b}$
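A brief Monte Carlo check of this result, assuming the symmetric dead-zone characteristic inferred from the worked solution (zero output for |x| ≤ α, slope k outside), with arbitrarily chosen values b = 2, α = 0.5, k = 3:

# Monte Carlo sketch of the dead-zone amplifier example for b > alpha;
# the symmetric characteristic below is an assumption based on the solution.
import numpy as np

rng = np.random.default_rng(1)
b, alpha, k = 2.0, 0.5, 3.0
x = rng.uniform(-b, b, 2_000_000)            # w_x(xi) = 1/(2b) on [-b, b]

# y = 0 for |x| <= alpha, linear with slope k outside the dead zone
y = np.where(x > alpha, k * (x - alpha),
             np.where(x < -alpha, k * (x + alpha), 0.0))

print(np.mean(y**2))                         # empirical output power
print(k**2 * (b - alpha)**3 / (3 * b))       # analytic value ~ 5.0625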
8.2.1 Definitions
The word process (from Latin, movement) denotes a sequence of changes of the properties of a system or object. These may appear deterministic or stochastic in nature to the observer. Just as a random variable may be thought of as a mapping from the sample space of a given experiment into the set of real or complex numbers, a random process represents a mapping of the sample space into a set of signals. Thus a random process is a collection or ensemble of signals. In the following we will use capital letters, X(t) and X(n), to denote random processes for analog and digital signals, respectively. Another term for a random process is stochastic process, and in the rest of the chapter the two will be used interchangeably.
To apply the mathematical apparatus used to treat random variables, the process must be considered at a fixed time instance. In this way we obtain the values of the signals at a given time instance, and these random values are tied to the outcomes of the experiment. In other words, the values of the signal at the time instance t = t0 represent stochastic variables, which can be characterized by the appropriate probability density functions. The simplest of these are one-dimensional, of the type w_X(ξ; t).
The expected value or mean of a random process is also a function of time and can be obtained by:

$E\{X(t)\} = \int_{-\infty}^{\infty} \xi\, w_X(\xi; t)\,d\xi = \mu_1(t)$   (8.2.3)
Figure 8.5. Examples of different realizations of analog and digital random signals.
The expected value E{X(t1)X(t2)} plays an essential part in the characterization of the process. It is known as the autocorrelation of the signal and is calculated as:

$R_X(t_1, t_2) = E\{X(t_1)X(t_2)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \xi_1 \xi_2\, w_X(\xi_1, \xi_2; t_1, t_2)\,d\xi_1\,d\xi_2$   (8.2.5)
As noted, a process that is stationary in the strict sense has a qth-order density function independent of time, and the qth-order moment is then also independent of time:

$E\{X^q(t)\} = \int_{-\infty}^{\infty} \xi^q w_X(\xi)\,d\xi, \qquad q \text{ integer}$   (8.2.8)
will depend only on the difference between t1 and t2. This difference is often denoted by τ, and the autocorrelation of the stationary process becomes:

$R_X(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \xi_1 \xi_2\, w_X(\xi_1, \xi_2; |\tau|)\,d\xi_1\,d\xi_2$
$R_X(k) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \xi_1 \xi_2\, w_X(\xi_1, \xi_2; |k|)\,d\xi_1\,d\xi_2$   (8.2.10)
The fact that a stationary random process is ergodic means that the time averages of any of the signals generated by the process are equal to the mean values calculated over the ensemble of realizations. For example:

$E\{X(t)\} = \langle x(t) \rangle$
$E\{X(n)X(n+k)\} = \langle x(n)x(n+k) \rangle$   (8.2.13)
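A small sketch of what (8.2.13) says in practice: for an ergodic process, the time average of one realization agrees with the ensemble average over many realizations. The example process, a sinusoid with uniformly random phase, is an assumption chosen because it is ergodic in the mean.

# Sketch of Eq. (8.2.13). X(n) = cos(2 pi f0 n + Theta), Theta uniform on
# [0, 2 pi), is an assumed example process that is ergodic in the mean.
import numpy as np

rng = np.random.default_rng(2)
f0, N, M = 0.01, 100_000, 100_000
n = np.arange(N)

theta = rng.uniform(0, 2 * np.pi)                    # one realization
print(np.cos(2 * np.pi * f0 * n + theta).mean())     # time average <x(n)> ~ 0

thetas = rng.uniform(0, 2 * np.pi, M)                # many realizations at n0 = 5
print(np.cos(2 * np.pi * f0 * 5 + thetas).mean())    # ensemble average E{X(5)} ~ 0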
which means that there is no DC component of the current flowing through the
resistors. If we calculate the power of the resulting signal we will find out that:
$\langle x_p^2(t) \rangle \ne \langle x_q^2(t) \rangle$,   (8.2.16)
where xp (t) and xq (t) are two randomly chosen signals from the constructed process.
They are different, because the actual resistance of the resistors varies from resistor
to resistor.
In other words, it is not possible to characterize the process based on the study
of only one of the signals assigned to the outcomes of the experiment.
Now, if the process is modified such that the resistors in the experiment are fully identical and are kept under the same conditions, then the process becomes ergodic.
Under these conditions we will have:
hx2p (t)i = hx2q (t)i, for all p and q. (8.2.17)
For the sake of clarity we must emphasize that this is not the same as
$x_p(t) \equiv x_q(t) \quad \text{for } p \ne q$,   (8.2.18)
because the signals emanate from two different resistors.
$w_X(\xi) = \frac{1}{\sqrt{2\pi s^2}} \cdot \exp\left( -\frac{(\xi - m)^2}{2s^2} \right), \qquad -\infty < \xi < \infty$   (8.2.19)

(see also Table 8.1). The constants m and s are given by:

$m = E\{X(t)\}$
$s^2 = E\{(X(t) - E\{X(t)\})^2\} = E\{X^2(t)\} - E^2\{X(t)\}$   (8.2.20)
If the mean value of the process is 0, E{X(t)} = 0, then

$s^2 = E\{X^2(t)\} = R(0)$,   (8.2.21)

where R(τ) is the autocorrelation function of the process.
It can be shown that the higher-order even moments of the process are given by:

$E\{x^{2q}(t)\} = \frac{(2q)!}{2^q q!} s^{2q}$,   (8.2.22)

while all moments of odd order are equal to zero.
The two-dimensional probability density function of a stationary normal process with mean value 0 is:

$w_X(\xi_1, \xi_2; \tau) = \frac{1}{2\pi\sqrt{R^2(0) - R^2(\tau)}} \cdot \exp\left( -\frac{R(0)\xi_1^2 + R(0)\xi_2^2 - 2R(\tau)\xi_1\xi_2}{2(R^2(0) - R^2(\tau))} \right)$   (8.2.23)
1 Notice that strictly speaking, a digital signal cannot be normally distributed.
From this equation it can be seen that if R(τ) = 0, either for discrete values of τ or in intervals of values, then we have:

$w_X(\xi_1, \xi_2; \tau) = w_X(\xi_1) \cdot w_X(\xi_2)$

In other words, the values of the random signal that are τ seconds apart are independent under these conditions.
It is possible to show that all multidimensional probability density functions are
functions of R(τ ) (and m, if m 6= 0).
The autocorrelation functions express the average dependence (relation) between the values of the signal that are τ seconds or k samples apart.
Closely related to the autocorrelation functions is the autocovariance:
Analogous to the relations that exist for random variables, the variance of a
discrete-time signal x(n) is defined as:
The variance of an analog signal is defined in a similar way. Using the fact that
$\int_{-\infty}^{\infty} (\xi - E\{X(n)\})^2 w_X(\xi)\,d\xi = E\{X^2(n)\} - E^2\{X(n)\}$   (8.3.5)
Consider the random signal Y (t) constructed from an arbitrary stationary ran-
dom signal X(t) as:
Y (t) = X(t) ± X(t + τ ) (8.3.8)
The autocorrelation function of Y (t) at lag zero is
This limit value is, in other words, the DC power of the signal². Hence the AC power of the random process must be given by:
since the total power P is given by the value of the autocorrelation function at time
lag 0.
As mentioned previously, the time-average values can be used instead of the
expected values. In other words, the power of a signal is
R_X(τ) is an even function; hence S_X(f) is a real and even function of the frequency f. Using the fact that the power P is P = R_X(0) and that $e^{-j2\pi f \cdot 0} \equiv 1$, we have

$P = R_X(0) = \int_{-\infty}^{\infty} S_X(f)\,df$   (8.3.16)
² You can imagine that the signal is either a voltage applied across, or a current flowing through, a 1 Ω resistor.

where $f_s = \frac{1}{\Delta T}$ is the sampling frequency.
As can be seen, the spectral characteristics of random signals are described by means of a deterministic signal with finite energy, R_X(τ) or R_X(k).
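As a sketch of this route from data to spectrum, the snippet below estimates R_X(k) from one realization and forms the power spectrum from the even extension of the autocorrelation sequence. The MA(1) process is an assumed example with known R_X.

# Sketch: estimate R_X(k) from one realization, then form S_X(f) from the
# autocorrelation. The MA(1) process x(n) = w(n) + 0.7 w(n-1) is an assumed
# example with R_X(0) = 1.49, R_X(1) = 0.7, and R_X(k) = 0 otherwise.
import numpy as np

rng = np.random.default_rng(3)
w = rng.standard_normal(400_000)
x = w[1:] + 0.7 * w[:-1]

K = 8
L = len(x) - K
R = np.array([np.mean(x[:L] * x[k:L + k]) for k in range(K)])
print(np.round(R[:3], 3))                        # ~ [1.49, 0.70, 0.00]

# S_X(f) = R(0) + 2 * sum_k R(k) cos(2 pi f k)   (even extension of R)
f = np.linspace(0.0, 0.5, 101)
S = R[0] + 2.0 * np.cos(2 * np.pi * np.outer(f, np.arange(1, K))) @ R[1:]
print(round(S.min(), 2))                         # ~ 0.09; theory: 1.49 + 1.4 cos(2 pi f) >= 0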
The cross correlation function expresses the level of connection (coherence) between
two signals X(t) and Y (t).
The corresponding expression for discrete-time signals is:
are usually not even functions of τ or k. Hence, the cross-correlation functions are not even either:
where a is a real constant and X(t) and Y(t) are analog random signals. This equation is quadratic in a and is a convex function of a. For it to be valid for all values of a, it must have no real roots. Therefore, the following condition must be fulfilled:

$(2R_{XY}(\tau))^2 - 4R_Y(0)R_X(0) < 0$,   (8.3.25)

hence

$R_{XY}^2(\tau) < R_X(0)R_Y(0)$   (8.3.26)
In the special case when a = 1 or a = −1 we get
and
$2|R_{XY}(k)| < R_X(0) + R_Y(0)$   (8.3.29)
Two random signals X(n) and Y(n) are said to be uncorrelated for a given time lag k = k1 if

$R_{XY}(k_1) = E\{X(n)\} \cdot E\{Y(n)\}$,

and orthogonal if $R_{XY}(k_1) = 0$. Notice that if only one of the signals has a zero DC component, then the two terms uncorrelated and orthogonal become identical.
While the above conditions impose some restrictions on the expected values of the signals, two signals are said to be statistically independent if their multidimensional distributions (and pdfs) can be written as a product of the individual distributions (and pdfs):

$w_{XY}(\xi, \eta; k) = w_X(\xi) \cdot w_Y(\eta)$   (8.3.33)

Consequently, statistically independent signals are also uncorrelated. The converse is valid only for signals whose two-dimensional probability density functions are normally distributed.
The above is valid for analog signals too. If the signals are ergodic, then the
expected values can be substituted by time averages.
and

$S_{YX}(f) = S_{XY}^*(f)$.   (8.3.36)
The cross-spectrum carries information about the parts of the spectrum in which the two signals resemble each other. In other words, the values of the cross-spectrum in that frequency range must be ≠ 0.
$P_Y = a^2 P_X$,   (8.4.3)

$R_Y(\tau) = E\{aX(t) \cdot aX(t+\tau)\} = a^2 R_X(\tau)$   (8.4.4)

it follows that

$S_Y(f) = a^2 S_X(f)$.   (8.4.5)

It must be noted that if X(t) is stationary, then Y(t) is stationary too.
Some applications use a slightly different form of signal combination, U(t) = aX(t) + b, for which:

$P_U = a^2 P_X + b^2$   (8.4.8)
$R_U(\tau) = a^2 R_X(\tau) + b^2$   (8.4.9)
$S_U(f) = a^2 S_X(f) + b^2 \delta(f)$   (8.4.10)

For the sum of two signals we must know the autocorrelation functions of X(t) and Y(t), and their cross-correlation, to find the autocorrelation of U(t).
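A quick numerical check of (8.4.9) for the combination U(t) = aX(t) + b as assumed above; x is zero-mean lowpass noise, and a, b are arbitrary constants.

# Sketch of Eq. (8.4.9) for U = a*X + b (the assumed combination above).
import numpy as np

rng = np.random.default_rng(4)
a, b = 3.0, 2.0
x = np.convolve(rng.standard_normal(1_000_000), np.ones(5) / 5, mode="same")
u = a * x + b

k = 2
L = len(x) - k
print(np.mean(u[:L] * u[k:L + k]))                   # R_U(k)
print(a**2 * np.mean(x[:L] * x[k:L + k]) + b**2)     # a^2 R_X(k) + b^2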
Notice that the integral is not normalized, as it is in the case of ergodic random signals.
An understanding of how the autocorrelation function arises can be obtained from the following equation:

$R_h(\tau) = h(-\tau) * h(\tau) = \int_{-\infty}^{\infty} h(\theta)h(\theta + \tau)\,d\theta$   (8.5.2)

It can be seen that the autocorrelation can be found by convolving the signal with a time-reversed copy of itself. This is illustrated in Fig. 8.6.
From the autocorrelation function one can find the energy density spectrum:

$R_h(\tau) = h(-\tau) * h(\tau) \leftrightarrow H^*(f)H(f) = |H(f)|^2$   (8.5.3)
The corresponding expressions for a finite-energy digital signal are:

$R_h(k) = \sum_{n=-\infty}^{\infty} h(n)h(n+k)$   (8.5.4)

and

$R_h(k) = h(-k) * h(k) \overset{\Delta T}{\leftrightarrow} H^*(f)H(f) = |H(f)|^2$.   (8.5.5)
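A compact numerical illustration of (8.5.4)-(8.5.5), using an arbitrary short sequence: the autocorrelation is computed as a convolution with the time-reversed signal and checked against the inverse transform of |H(f)|².

# Sketch of Eqs. (8.5.4)-(8.5.5) for an arbitrary finite-energy sequence h(n).
import numpy as np

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
R = np.convolve(h[::-1], h)              # R_h(k) = h(-k) * h(k), lags k = -4..4
print(R)                                 # [ 1  4 10 16 19 16 10  4  1]

Nfft = 16                                # >= 2*len(h) - 1 avoids circular overlap
S = np.abs(np.fft.fft(h, Nfft)) ** 2     # |H(f)|^2 on a frequency grid
R_back = np.fft.ifft(S).real             # lags 0..4 first, negative lags at the end
print(np.round(R_back[:5], 6))           # [19 16 10  4  1]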
[Figure 8.6. A finite-energy signal h(t) (top), its time-reversed copy h(−t) (middle), and the resulting autocorrelation function R_h(τ) (bottom).]
R_g(τ) is again a periodic function with the same period T as the signal g(t). This can easily be verified if τ is replaced by τ + T in (8.6.1).
The autocorrelation function is also an even function of τ, i.e. R_g(τ) = R_g(−τ). This property can also be verified by substitution of variables. In this case one uses the fact that the integration need not start at −T/2, but must only span an integer number of periods of g(t)³.
From the expression

$R_g(0) = \frac{1}{T} \int_{-T/2}^{T/2} g^2(t)\,dt$,   (8.6.2)
[Figure: a periodic signal g(t), the shifted signal g(t + τ₁), and the autocorrelation function R_g(τ).]
Hence,

$S_g(m) = G^*(m)G(m) = |G(m)|^2$,   (8.6.7)

where $g(t) \overset{T}{\leftrightarrow} G(m)$. The total power can be found by

$P = \sum_{m=-\infty}^{\infty} S_g(m)$   (8.6.8)
Since R_g(τ) is an even function of τ, the power spectrum S_g(m) will be a real and even function of m. Notice that S_g(m) is independent of the phase spectrum of g(t), which implies that different signals can have the same power spectrum and thereby the same autocorrelation function. It can also be seen that S_g(m) ≥ 0. The power spectrum of a periodic digital signal is given as

$R_g(k) \overset{N}{\leftrightarrow} S_g(m)$,   (8.6.9)

and as in the case of periodic analog signals the following relations will be valid:

$S_g(m) = |G(m)|^2, \qquad P = \sum_{m=0}^{N-1} S_g(m), \qquad S_g(m) \ge 0$.   (8.6.10)
It can easily be seen that R_gh(k) = R_hg(−k). Notice that although the given expression contains integration over a single period, the following relations are also valid:

$\lim_{T_1 \to \infty} \frac{1}{T_1} \int_{T_1} g(t)h(t+\tau)\,dt = R_{gh}(\tau)$   (8.6.14)

and

$\lim_{N_0 \to \infty} \frac{1}{N_0} \sum_{N_0} g(n)h(n+k) = R_{gh}(k)$   (8.6.15)
it follows that:

$S_{gh}(m) = H(m)G^*(m) = H(m)G(-m)$,   (8.6.17)
where $g(t) \overset{T}{\leftrightarrow} G(m)$ and $h(t) \overset{T}{\leftrightarrow} H(m)$.
The same relations can be proven for digital signals too:

$R_{gh}(k) \overset{N}{\leftrightarrow} S_{gh}(m) = H(m)G^*(m)$,   (8.6.18)

where $g(n) \overset{N}{\leftrightarrow} G(m)$ and $h(n) \overset{N}{\leftrightarrow} H(m)$. The cross power spectrum gives information about those frequencies where both signals have spectral components. It can be seen that many signal pairs can have the same cross-correlation function, and hence the same cross power spectrum.
$g(t) = a_1 \cos(2\pi f_0 t + \theta_1)$   (8.6.19)

and

$g(n) = a_1 \cos\left(2\pi \frac{n}{N} + \theta_1\right)$,   (8.6.20)

respectively. In the expressions above a1 and θ1 are constants, and the periods are T = 1/f0 and N. These signals have spectral components only at m = ±1 (only first harmonics). In both cases

$S_g(\pm 1) = |G(\pm 1)|^2 = \frac{a_1^2}{4}$   (8.6.21)
It follows that the autocorrelation functions of the two signals are:

$R_g(\tau) = \frac{1}{2} a_1^2 \cos 2\pi f_0 \tau \qquad \text{and} \qquad R_g(k) = \frac{1}{2} a_1^2 \cos 2\pi \frac{k}{N}$   (8.6.22)
Notice that the phase angle of the signals θ1 is missing in the expressions of the
autocorrelation functions. This means that all pure tones with the same frequency
and strength have the same autocorrelation function.
The cross-correlation function of two tones with the same frequency can again be found using simple considerations in the frequency domain. Let g(t) and g(n) be defined by the expressions above, and let h(t) = a2 cos(2πf0 t + θ2) and h(n) = a2 cos(2πn/N + θ2). In other words, the differences lie in the amplitudes and phases. The cross-power spectrum is:

$S_{gh}(\pm 1) = \frac{a_1 a_2}{4} e^{\pm j(\theta_2 - \theta_1)}$,   (8.6.23)
which implies that:

$R_{gh}(\tau) = \frac{a_1 a_2}{2} \cos(2\pi f_0 \tau + (\theta_2 - \theta_1))$   (8.6.24)

and

$R_{gh}(k) = \frac{a_1 a_2}{2} \cos\left(2\pi \frac{k}{N} + (\theta_2 - \theta_1)\right)$.   (8.6.25)
If the two pure tones do not have the same period, then R_gh(τ) ≡ 0 and R_gh(k) ≡ 0.
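A numerical check of (8.6.25) with arbitrarily chosen amplitudes and phases; the periodic time average recovers only a1, a2, and the phase difference θ2 − θ1.

# Numerical sketch of Eq. (8.6.25); amplitudes and phases below are arbitrary.
import numpy as np

N = 256
n = np.arange(N)
a1, a2, th1, th2 = 2.0, 0.5, 0.3, 1.1
g = a1 * np.cos(2 * np.pi * n / N + th1)
h = a2 * np.cos(2 * np.pi * n / N + th2)

k = 10
Rgh = np.mean(g * np.roll(h, -k))                            # (1/N) sum g(n) h(n+k)
print(Rgh)
print(a1 * a2 / 2 * np.cos(2 * np.pi * k / N + th2 - th1))   # Eq. (8.6.25)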
periodic signal consisting of the same spectral components and having the same power will have values that are larger.
Such a signal (g(t) or g(n)) is further characterized by the fact that the autocorrelation function is proportional to the signal itself, i.e. it has the same shape as the signal. This is a direct consequence of the fact that $R_g(\tau) \overset{T}{\leftrightarrow} |G(m)|^2$ and $R_g(k) \overset{N}{\leftrightarrow} |G(m)|^2$.
If the analog signal h(t) has spectral components at the same frequencies as g(t), it follows that the cross-correlation function R_gh(τ) will have the same shape as h(t). This can be shown directly from the relation:

where k_x is the constant and real value of the spectral components of g(t).
where the constant N0 satisfies N0 < N/2. For positive values of k one gets the sum

$R_g(k) = \frac{1}{N} \sum_{n=0}^{N_0 - 1 - k} a^2 = \frac{N_0 - k}{N} a^2$   (8.6.31)

and hereby

$R_g(k) = a^2 \frac{N_0}{N} \left( 1 - \frac{|k|}{N_0} \right), \qquad |k| < N_0$   (8.6.32)
These results can also be obtained by first finding the power spectrum of g(t) or g(n) and then using the inverse Fourier transform to find the correlation function.
Notice that the above expressions are for signals with a duty cycle below 50 % (θ/T < 1/2 and N0/N < 1/2). Signals whose duty cycle exceeds 50 % can be treated as the sum of a constant DC signal and a rectangular signal with a duty cycle < 50 % and negative amplitude a.
[Figure: a periodic rectangular signal g(t), the shifted signal g(t + τ₁), and the triangular autocorrelation function R_g(τ).]
Let the spectrum of g(t) be $g(t) \overset{T}{\leftrightarrow} G(m) = G_R(m) + jG_I(m)$, and let the pure tone be

$\cos\left(2\pi \frac{m_0}{T} t\right)$
The cross-correlation is

$R_{gc}(\tau) = \frac{1}{T} \int_{-T/2}^{T/2} g(t) \cos\left(2\pi \frac{m_0}{T}(t + \tau)\right) dt$   (8.6.33)
Hence,

$R_{gc}(0) = G_R(m_0) \qquad \text{and} \qquad R_{gc}\left(\frac{T}{4m_0}\right) = G_I(m_0)$   (8.6.35)
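In discrete time the same trick reads out the real and imaginary parts of one Fourier coefficient. The sketch below assumes a test signal with an arbitrary amplitude and phase at bin m0 (plus one other harmonic) and the coefficient convention G(m) = (1/N) Σ g(n) e^(−j2πmn/N).

# Sketch of Eqs. (8.6.33)-(8.6.35) in discrete time; a and phi are arbitrary,
# and the coefficient convention G(m) = (1/N) sum g(n) exp(-j 2 pi m n / N)
# is an assumption consistent with the relations above.
import numpy as np

N, m0 = 240, 3
n = np.arange(N)
a, phi = 1.5, 0.8
g = a * np.cos(2 * np.pi * m0 * n / N + phi) + 0.7 * np.cos(2 * np.pi * 5 * n / N)

Rgc = lambda k: np.mean(g * np.cos(2 * np.pi * m0 * (n + k) / N))
G = np.mean(g * np.exp(-2j * np.pi * m0 * n / N))      # = (a/2) e^{j phi}

print(Rgc(0), G.real)                # R_gc(0)        = G_R(m0)
print(Rgc(N // (4 * m0)), G.imag)    # R_gc(N/(4 m0)) = G_I(m0)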
From a stochastic point of view, this is equivalent to saying that for any outcome of the given experiment, x_q(t) or x_q(n), a new signal y_q(t) or y_q(n) is created by the convolution:

$y_q(t) = h(t) * x_q(t) \qquad \text{or} \qquad y_q(n) = h(n) * x_q(n)$.   (8.7.2)

Here x_q(t) or x_q(n) are input signals to the filter. The output of the filter is again a random process, Y(t) or Y(n), and it is the properties of this random process that we will try to find using the properties of the random processes X(t) and X(n) and the characteristics of the filter.
It can be shown that if the input signals are strictly stationary, then the output of the filter is stationary too. However, if the input is stationary of kth order, the output is not necessarily stationary of the same order.
There exists no general calculation procedure for finding the probability density function of the output of the filter. It is possible to show, though⁵, that if the “duration” of the impulse response of a filter is long compared to the “duration” of the autocorrelation function of the input signal, then the probability density function of the resulting signal will approach a normal (Gaussian) distribution. Here the loose term “duration” can be, for example, the RMS duration.
The mean of Y(t) can be derived directly as:

$E\{Y(t)\} = \int_{-\infty}^{\infty} h(\Theta) E\{X(t - \Theta)\}\,d\Theta = E\{X(t)\} \cdot \int_{-\infty}^{\infty} h(\Theta)\,d\Theta = E\{X(t)\} \cdot H(0)$   (8.7.3)
In other words, the mean of Y(t) is equal to the mean of X(t) multiplied by the DC amplification of the filter. The corresponding expression for digital signals is:

$E\{Y(n)\} = \sum_{q=-\infty}^{\infty} h(q) E\{X(n - q)\} = E\{X(n)\} H(0)$.   (8.7.4)
The autocorrelation function for the output of the filter for the analog case can
be found in the following manner.
From
Y (t) = h(t) ∗ X(t) (8.7.5)
⁵ See e.g. R.A. Altes: Wide Band Systems and Gaussianity, IEEE Transactions on Information Theory.
and
Y (t + τ ) = h(t) ∗ X(t + τ ) , (8.7.6)
one finds that
$Y(t)Y(t+\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\Theta)X(t - \Theta) \cdot h(\sigma) X(t + \tau - \sigma)\,d\Theta\,d\sigma$,   (8.7.7)
and hereby
$R_Y(\tau) = E\{Y(t)Y(t+\tau)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\Theta)h(\sigma) R_X(\tau - \sigma + \Theta)\,d\Theta\,d\sigma$.   (8.7.8)
As mentioned in section 8.5, the inner of the two integrals is the autocorrelation
function Rh (γ) of the given filter. This means that
RY (τ ) = Rh (τ ) ∗ RX (τ ) . (8.7.10)
Since

$R_h(\tau) = h(-\tau) * h(\tau) \leftrightarrow H^*(f)H(f)$,   (8.7.11)

in the frequency domain we get

$S_Y(f) = |H(f)|^2 S_X(f)$.   (8.7.12)
These considerations can be carried out also for digital signals and similar results
will be obtained.
The cross-correlation function R_XY(τ) between the input and output signals of the filter can be calculated as

$R_{XY}(\tau) = E\{X(t)Y(t+\tau)\} = E\left\{X(t) \int_{-\infty}^{\infty} h(\Theta)X(t + \tau - \Theta)\,d\Theta\right\} = \int_{-\infty}^{\infty} h(\Theta)R_X(\tau - \Theta)\,d\Theta = h(\tau) * R_X(\tau)$.   (8.7.13)
In practice, this relation between R_X(τ), R_XY(τ) and h(τ) is used to find h(τ), and thereby H(f), for filters.
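A short sketch of that practice: if the input is approximately white, R_X(τ) ≈ Aδ(τ), so R_XY(τ) ≈ A·h(τ), and cross-correlating input with output estimates the impulse response. The four-tap filter below is a made-up example.

# Sketch of the practical use of Eq. (8.7.13): with white input noise,
# R_X(k) ~ delta(k), so R_XY(k) ~ h(k). h_true is a made-up "unknown" filter.
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(500_000)
h_true = np.array([0.2, 0.9, -0.4, 0.1])
y = np.convolve(x, h_true)[:len(x)]             # filter output

L = len(x) - 3
h_est = [np.mean(x[:L] * y[k:L + k]) for k in range(4)]
print(np.round(h_est, 3))                       # ~ [0.2, 0.9, -0.4, 0.1]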
In the frequency domain, this result is equivalent to

$S_{XY}(f) = H(f) S_X(f)$.
Figure 8.9. Definition of the equivalent noise bandwidth for a filter. Here fk is
chosen as the mean frequency in the spectrum.
Since P_Y ≥ 0 for any filter, this implies that the power spectrum of any random signal can never take negative values, i.e.

$S(f) \ge 0$.   (8.7.17)
In connection with filters, an often used measure is the equivalent noise bandwidth B_e, defined as

$B_e = \int_0^{\infty} \frac{|H(f)|^2}{|H(f_k)|^2}\,df \qquad \text{and} \qquad B_e = \int_0^{f_s/2} \frac{|H(f)|^2}{|H(f_k)|^2}\,df$   (8.7.18)
for analog and digital signals, respectively, where f_k is some characteristic frequency of the filter, e.g. f = 0, the center frequency, or the frequency where |H(f)| has its maximum. Although it is not directly evident from the expression, B_e is a function of both |H(f_k)| and f_k.
The equivalent noise bandwidth of a filter is based on the definition of the equivalent noise bandwidth of a signal x(t). B_e is the width of a fictitious rectangular filter (in the frequency domain) filtering noise with uniform power density equal to S_x(f_k) W/Hz. B_e is chosen such that the power at the output of the rectangular filter is equal to the power of the signal x(t). The power of x(t) is

$P_x = \int_0^{\infty} S_x(f)\,df$   (8.7.19)
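As a numerical illustration of (8.7.18), the sketch below computes B_e for an assumed first-order lowpass |H(f)|² = 1/(1 + (f/f1)²) with f_k = 0, for which the closed form B_e = (π/2)f1 is known.

# Sketch of Eq. (8.7.18) for an assumed first-order lowpass, f1 = 100 Hz, f_k = 0;
# the exact equivalent noise bandwidth is (pi/2)*f1 ~ 157.1 Hz.
import numpy as np

f1 = 100.0
f = np.linspace(0.0, 1e5, 1_000_001)          # truncate the integral at 100 kHz
H2 = 1.0 / (1.0 + (f / f1) ** 2)              # |H(f)|^2

df = f[1] - f[0]
Be = np.sum(H2 / H2[0]) * df                  # integral of |H(f)|^2 / |H(f_k)|^2
print(Be, np.pi / 2 * f1)                     # ~157.0 vs 157.08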
White noise is characterized by a power density spectrum that is constant over all frequencies,

$S_Y(f) = A$,

where A is a constant.
The autocorrelation of this noise for analog signals is
RY (τ ) = A · δ(τ ) . (8.7.23)
The power of such a signal is infinite, implying that such a signal cannot be created
physically. This signal has, however, a number of theoretical uses.
The autocorrelation function of digital white noise is

$R_Y(k) = A \cdot \delta(k)$,

and the power of this signal is P_Y = A. Such a signal can be realized and is often used in digital signal processing.
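A minimal sketch, assuming Gaussian samples with variance A = 2: the estimated autocorrelation is A at lag 0 and near zero elsewhere, and the power equals A.

# Sketch: realizable digital white noise with P_Y = A and R_Y(k) = A*delta(k).
import numpy as np

rng = np.random.default_rng(6)
A = 2.0
y = rng.normal(0.0, np.sqrt(A), 1_000_000)

L = len(y) - 3
R = [np.mean(y[:L] * y[k:L + k]) for k in range(4)]
print(np.round(R, 3))          # ~ [2.0, 0.0, 0.0, 0.0]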
Band-limited white noise is produced by filtering white noise with a filter with an appropriate transfer function H(f). The power density spectrum of this signal is

$S_X(f) = |H(f)|^2 A$.   (8.7.25)

The autocorrelation function of this signal is then

$R_X(\tau) = A R_h(\tau)$.   (8.7.26)
we get that

$P_X = 2A|H(f_k)|^2 B_e = 2B_e S_X(f_k)$,   (8.7.28)

where B_e is the equivalent noise bandwidth found for the characteristic frequency f_k (see sec. 8.7.1).
From this it appears that B_e can be considered as the bandwidth, for positive frequencies, of a band-limited white noise signal with power density S_X(f_k) which has the same power as the original signal.
The derivation for digital signals is similar, and the result is

$P_X = \frac{1}{2f_g} \int_{-f_g}^{f_g} S_X(f)\,df = \frac{B_e}{f_g} S_X(f_k)$,   (8.7.29)
where A is a constant. It is obvious that the power P_X in this case is also infinitely large. That is why the pink noise signals used in practice are band-limited both at low and at high frequencies. The spectrum of band-limited pink noise is given by

$S_X(f) = \begin{cases} \dfrac{A}{|f|} & \text{for } f_n < |f| < f_ø \\ 0 & \text{otherwise} \end{cases}$   (8.7.31)
Since t appears in the argument of the trigonometric factors, it is not obvious that V(t) is stationary, even when X(t) and Y(t) are.
The condition for V(t) to be stationary in the wide sense is that the expected value of V(t) and its autocorrelation function are independent of the zero point of the time axis (see sec. 8.2.2). Since

$E\{V(t)\} = E\{X(t)\} \cos 2\pi f_0 t + E\{Y(t)\} \sin 2\pi f_0 t$,   (8.7.34)

V(t) will be a wide-sense stationary signal when the following equation is fulfilled:

$R_V(\tau) = R_X(\tau) \cos 2\pi f_0 \tau - R_{YX}(\tau) \sin 2\pi f_0 \tau$.   (8.7.38)

From here we can deduce that the spectra of X(t) and Y(t) will be concentrated in a region around f = 0, with a bandwidth on the order of B/2.
If we now create two new random signals

$A(t) = \sqrt{X^2(t) + Y^2(t)}, \qquad \psi(t) = \arctan\left(-\frac{Y(t)}{X(t)}\right)$   (8.7.39)
from the signals X(t) and Y(t), it can be seen that V(t) can be written as

$V(t) = A(t) \cos(2\pi f_0 t + \psi(t))$

The signals A(t) and ψ(t) will also roughly be signals with spectra concentrated around f = 0.
If the signals X(t) and Y(t) are mutually independent and Gaussian, the one-dimensional probability density functions of the signals A(t) and ψ(t) will be

$w_A(\alpha) = \begin{cases} \dfrac{\alpha}{\sigma^2}\, e^{-\frac{\alpha^2}{2\sigma^2}} & \alpha \ge 0 \\ 0 & \alpha < 0 \end{cases}$   (8.7.41)
where

$\sigma^2 = R_V(0) = P_V$,   (8.7.42)

and⁶

$w_\psi(\beta) = \begin{cases} \dfrac{1}{2\pi} & 0 \le \beta \le 2\pi \\ 0 & \text{otherwise} \end{cases}$   (8.7.43)
The mean and variance of A(t) are

$E\{A(t)\} = \left(\frac{1}{2}\pi\sigma^2\right)^{1/2}$   (8.7.44)

and

$\sigma^2\{A(t)\} = \left(2 - \frac{\pi}{2}\right)\sigma^2$,   (8.7.45)

and also

$R_V(\tau) = R_X(\tau) \cos 2\pi f_0 \tau$.   (8.7.46)
where f1 = (2πRC)⁻¹.
The autocorrelation function corresponding to S_Y(f) is found by applying the inverse Fourier transform to S_Y(f). It is

$R_Y(\tau) = \frac{A}{2RC}\, e^{-\frac{|\tau|}{RC}}$   (8.7.49)

(see Fig. 8.10). The total power of the signal is then

$P_Y = \frac{A}{2RC}$.   (8.7.50)
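A discrete-time sketch of this behavior: a first-order recursive filter y(n) = ρy(n−1) + w(n), with ρ playing the role of exp(−ΔT/RC), turns white noise into noise with a geometrically (i.e. exponentially) decaying autocorrelation, the sampled counterpart of (8.7.49).

# Sketch: discrete analog of RC-filtered white noise; rho = exp(-dT/RC) is the
# assumed discretization, and R_Y(k) is proportional to rho^|k|.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(7)
rho = 0.95
w = rng.standard_normal(2_000_000)
y = lfilter([1.0], [1.0, -rho], w)             # y(n) = rho*y(n-1) + w(n)

L = len(y) - 3
R = np.array([np.mean(y[:L] * y[k:L + k]) for k in range(4)])
print(np.round(R / R[0], 3))                   # ~ [1, 0.95, 0.9025, 0.857]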
From the shape of the impulse response h(t) of the integrator, shown in Fig. 8.11, it follows that

$R_h(\tau) = \begin{cases} (1 - |\tau|/T)/T & |\tau| < T \\ 0 & |\tau| \ge T \end{cases}$   (8.7.53)
Figure 8.10. The power density spectrum and the autocorrelation function of RC-filtered white noise.
[Figure 8.11. The impulse response h(t) of the integrator, its autocorrelation function R_h(τ), and the corresponding frequency response |H(f)|².]
according to which

$R_Y(\tau) = R_h(\tau) * R_X(\tau)$.   (8.7.54)

If X(t) is a white noise signal with constant power density A, then R_Y(τ) and S_Y(f) become

$R_Y(\tau) = A R_h(\tau)$   (8.7.55)
⁶ It can be shown that A(t) and ψ(t) are, under these conditions, statistically independent.
and

$S_Y(f) = A \frac{\sin^2 \pi f T}{(\pi f T)^2}$,   (8.7.56)

respectively. The power of the signal Y(t) is

$P_Y = A/T$.   (8.7.57)
If we consider a digital signal X(n) that is filtered with a digital integrator, the output signal Y(n) is given by

$Y(n) = \frac{1}{N} \sum_{q=n-(N-1)}^{n} X(q)$.   (8.7.58)
PY = A/N . (8.7.61)
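A quick sketch of (8.7.58) and (8.7.61), assuming digital white noise with power A = 4 and an N = 16 point integrator:

# Sketch of Eqs. (8.7.58) and (8.7.61): averaging digital white noise of power A
# over N samples reduces the output power to A/N. A and N are arbitrary choices.
import numpy as np

rng = np.random.default_rng(8)
A, N = 4.0, 16
x = rng.normal(0.0, np.sqrt(A), 1_000_000)
y = np.convolve(x, np.ones(N) / N, mode="valid")   # running mean over N samples

print(np.mean(y**2), A / N)                        # both ~ 0.25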
$y = ax^2$,   (8.7.62)

where a is a constant.
Let X(t) be the input signal to the non-linear system and Y(t) its corresponding output. The mean value of Y(t) is

$E\{Y(t)\} = a E\{X^2(t)\} = a P_X$

In other words, Y(t) has a DC component whose magnitude is proportional to the total power of X(t).
then

$w_Y(\eta) = \begin{cases} \dfrac{w_X(-\sqrt{\eta/a}) + w_X(\sqrt{\eta/a})}{2\sqrt{a\eta}} & \eta > 0 \\ 0 & \eta < 0 \end{cases}$   (8.7.65)
hence

$w_Y(\eta) = \begin{cases} \dfrac{1}{\sqrt{2\pi R_X(0)}} \cdot \dfrac{1}{\sqrt{\eta}} \cdot e^{-\frac{\eta}{2R_X(0)}} & \eta > 0 \\ 0 & \eta < 0 \end{cases}$   (8.7.67)
$R_Y(\tau) = E\{Y(t)Y(t+\tau)\} = E\{X^2(t)X^2(t+\tau)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \xi_1^2 \xi_2^2\, w_X(\xi_1, \xi_2; \tau)\,d\xi_1\,d\xi_2$   (8.7.68)
From here it can be seen that Y(t) has the aforementioned DC component, and that the spectrum of Y(t) is

$S_Y(f) = R_X^2(0)\,\delta(f) + 2 S_X(f) * S_X(f)$.   (8.7.70)
Figure 8.13 shows an example of the spectrum of Y (t), when X(t) is a bandlimited
white noise (ideal filter).
It must be noted that the continuous portion of S_Y(f) will always be ≠ 0 in the vicinity of f = 0, independent of the shape of S_X(f).
When the total power of a signal is measured (or its RMS value), the AC power of the signal Y(t) must also be known. This AC power is

$P_{AC} = 2R_X^2(0)$,   (8.7.71)
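A Monte Carlo check of (8.7.71) with a = 1: squaring zero-mean Gaussian noise gives a DC value of R_X(0) and an AC power of 2R_X²(0). The lowpass noise below is an assumed example input.

# Sketch of Eqs. (8.7.62)-(8.7.71) with a = 1: for zero-mean Gaussian X,
# E{Y} = R_X(0) and the AC power of Y = X^2 is var(Y) = 2 R_X(0)^2.
import numpy as np

rng = np.random.default_rng(9)
x = np.convolve(rng.standard_normal(2_000_000), np.ones(4) / 4, mode="same")
y = x**2

Rx0 = np.mean(x**2)
print(np.mean(y), Rx0)             # DC value:  E{Y} = R_X(0)
print(np.var(y), 2 * Rx0**2)       # AC power:  2 R_X^2(0)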
Figure 8.13. Power density spectrum of band-limited white noise, and of squared
band-limited white noise.
Hence,

$E\{Y(t)\} = \int_0^{\infty} \eta\, w_Y(\eta)\,d\eta = \sqrt{\frac{2}{\pi} R_X(0)}$.   (8.7.75)

This mean value (DC value) is proportional to $\sqrt{P_X}$.
It can be shown that under these assumptions the autocorrelation function of Y(t) is

$R_Y(\tau) = \frac{2}{\pi} R_X(0) + \frac{1}{\pi} \frac{R_X^2(\tau)}{R_X(0)} + \frac{1}{12\pi} \frac{R_X^4(\tau)}{R_X^3(0)} + \cdots$   (8.7.76)
The power density spectrum of Y(t) is found by applying the Fourier transform to the autocorrelation function R_Y(τ). This spectrum too has a DC component and an AC component in the vicinity of the DC component (near f = 0).