7 Probability Communication

Outline

1 Introduction

2 Probability and Random Variables

3 Random Processes

Introduction 2
Introduction

The main objective of a communication system is the transfer of information over a channel.
The message signal is best modeled as a random signal.
Two types of imperfections in a communication channel:
  - Deterministic imperfections, such as linear and nonlinear distortions, inter-symbol interference, etc.
  - Nondeterministic imperfections, such as additive noise, interference, multipath fading, etc.
We are concerned with the methods used to describe and characterize a random signal, generally referred to as a random process (also commonly called a stochastic process).
In essence, a random process is a random variable evolving in time.
Introduction 3
Outline

1 Introduction

2 Probability and Random Variables

3 Random Processes

Probability and Random Variables 4


Sample Space and Probability

Random experiment: its outcome, for some reason, cannot be predicted with certainty.

Examples: throwing a die, flipping a coin and drawing a card from a deck.

Sample space: the set of all possible outcomes, denoted by Ω.


Outcomes are denoted by ω’s and each ω lies in Ω, i.e., ω ∈ Ω.

A sample space can be discrete or continuous.

Events are subsets of the sample space for which measures of their
occurrences, called probabilities, can be defined or determined.

Probability and Random Variables 5


Example of Throwing a Fair Die

Various events can be defined: “the outcome is an even number of dots”, “the outcome is smaller than 4 dots”, “the outcome is more than 3 dots”, etc.

Probability and Random Variables 6


Three Axioms of Probability

For a discrete sample space Ω, define a probability measure P on Ω as a


set function that assigns nonnegative values to all events, denoted by E,
in Ω such that the following conditions are satisfied
Axiom 1: 0 ≤ P (E) ≤ 1 for all E ∈ Ω (on a % scale probability
ranges from 0 to 100%. Despite popular sports lore, it is impossible
to give more than 100%).
Axiom 2: P (Ω) = 1 (when an experiment is conducted there has to
be an outcome).
Axiom 3: For mutually exclusive events¹ E1, E2, E3, . . . we have
P(E1 ∪ E2 ∪ E3 ∪ · · ·) = P(E1) + P(E2) + P(E3) + · · ·.

¹ The events E1, E2, E3, . . . are mutually exclusive if Ei ∩ Ej = ∅ for all i ≠ j, where ∅ is the null set.
Probability and Random Variables 7
Important Properties of the Probability Measure

1. P (E c ) = 1 − P (E), where E c denotes the complement of E. This


property implies that P (E c ) + P (E) = 1, i.e., something has to
happen.
2. P(∅) = 0 (again, something has to happen).
3. P (E1 ∪ E2 ) = P (E1 ) + P (E2 ) − P (E1 ∩ E2 ). Note that if two
events E1 and E2 are mutually exclusive then
P (E1 ∪ E2 ) = P (E1 ) + P (E2 ), otherwise the nonzero common
probability P (E1 ∩ E2 ) needs to be subtracted off.
4. If E1 ⊆ E2 then P (E1 ) ≤ P (E2 ). This says that if event E1 is
contained in E2 then occurrence of E1 means E2 has occurred but
the converse is not true.

Probability and Random Variables 8


Conditional Probability

We observe or are told that event E1 has occurred but are actually interested in event E2: knowledge that E1 has occurred changes the probability of E2 occurring.
If it was P(E2) before, it now becomes P(E2|E1), the probability of E2 occurring given that event E1 has occurred.
This conditional probability is given by
P(E2|E1) = P(E2 ∩ E1)/P(E1) if P(E1) ≠ 0, and 0 otherwise.

If P (E2 |E1 ) = P (E2 ), or P (E2 ∩ E1 ) = P (E1 )P (E2 ), then E1 and


E2 are said to be statistically independent.
Bayes’ rule:
P(E2|E1) = P(E1|E2) P(E2) / P(E1).
Probability and Random Variables 9
Total Probability Theorem

The events E1, E2, . . . , En partition the sample space Ω if:
(i) E1 ∪ E2 ∪ · · · ∪ En = Ω,
(ii) Ei ∩ Ej = ∅ for all 1 ≤ i, j ≤ n and i ≠ j.
If for an event A we have the conditional probabilities {P(A|Ei), i = 1, . . . , n}, P(A) can be obtained as
P(A) = Σ_{i=1}^{n} P(Ei) P(A|Ei).

Bayes’ rule:
P(Ei|A) = P(A|Ei) P(Ei) / P(A) = P(A|Ei) P(Ei) / [Σ_{j=1}^{n} P(A|Ej) P(Ej)].
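As a quick numerical illustration of the total probability theorem and Bayes’ rule, here is a minimal Python sketch; the priors and likelihoods below are made-up values for a two-hypothesis experiment, not numbers from the slides.

priors = {"E1": 0.6, "E2": 0.4}          # P(Ei); the Ei partition the sample space
likelihoods = {"E1": 0.9, "E2": 0.2}     # P(A|Ei)

# Total probability theorem: P(A) = sum_i P(Ei) P(A|Ei)
p_A = sum(priors[e] * likelihoods[e] for e in priors)

# Bayes' rule: P(Ei|A) = P(A|Ei) P(Ei) / P(A)
posteriors = {e: likelihoods[e] * priors[e] / p_A for e in priors}

print(p_A)         # 0.62
print(posteriors)  # {'E1': 0.871, 'E2': 0.129} (approximately)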

Probability and Random Variables 10


Random Variables

[Figure: a random variable maps each outcome ω1, ω2, ω3, ω4 ∈ Ω to a point x(ω1), x(ω2), x(ω3), x(ω4) on the real line R.]

A random variable is a mapping from the sample space Ω to the set


of real numbers.
We shall denote random variables by boldface, i.e., x, y, etc., while
individual or specific values of the mapping x are denoted by x(ω).
Probability and Random Variables 11
Random Variable in the Example of Throwing a Fair Die

[Figure: the six faces of the die are mapped to the real numbers 1, 2, 3, 4, 5, 6.]

There could be many other random variables defined to describe the


outcome of this random experiment!

Probability and Random Variables 12


Cumulative Distribution Function (cdf)

cdf gives a complete description of the random variable. It is


defined as:
Fx (x) = P (ω ∈ Ω : x(ω) ≤ x) = P (x ≤ x).

The cdf has the following properties:


1. 0 ≤ Fx (x) ≤ 1
2. Fx (x) is nondecreasing: Fx (x1 ) ≤ Fx (x2 ) if x1 ≤ x2
3. Fx (−∞) = 0 and Fx (+∞) = 1
4. P (a < x ≤ b) = Fx (b) − Fx (a).

Probability and Random Variables 13


Typical Plots of cdf I

A random variable can be discrete, continuous or mixed.


[Figure (a): a typical cdf Fx(x), increasing from 0 to 1 as x goes from −∞ to ∞.]

Probability and Random Variables 14


Typical Plots of cdf II

[Figures (b) and (c): two more typical cdf plots Fx(x), each increasing from 0 to 1 as x goes from −∞ to ∞.]

Probability and Random Variables 15


Probability Density Function (pdf)

The pdf is defined as the derivative of the cdf:
fx(x) = dFx(x)/dx.
It follows that:
P(x1 ≤ x ≤ x2) = P(x ≤ x2) − P(x ≤ x1) = Fx(x2) − Fx(x1) = ∫_{x1}^{x2} fx(x) dx.
Basic properties of the pdf:
1. fx(x) ≥ 0.
2. ∫_{−∞}^{∞} fx(x) dx = 1.
3. In general, P(x ∈ A) = ∫_A fx(x) dx.
For discrete random variables, it is more common to define the probability mass function (pmf): pi = P(x = xi).
Note that, for all i, one has pi ≥ 0 and Σ_i pi = 1.
Probability and Random Variables 16
Bernoulli Random Variable

[Figure: pmf and cdf of a Bernoulli random variable, with mass 1 − p at x = 0 and mass p at x = 1.]

A discrete random variable that takes two values 1 and 0 with


probabilities p and 1 − p.
Good model for a binary data source whose output is 1 or 0.
Can also be used to model the channel errors.
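A minimal simulation sketch of this use as a channel-error model (assuming numpy; the error probability p and sequence length are illustrative):

import numpy as np

rng = np.random.default_rng(0)
p, n = 0.1, 100_000                       # illustrative error probability and length

data = rng.integers(0, 2, n)              # binary source bits
errors = (rng.random(n) < p).astype(int)  # Bernoulli(p) error indicators
received = data ^ errors                  # channel flips a bit wherever errors == 1

print(errors.mean())                      # relative frequency of errors, roughly p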

Probability and Random Variables 17


Binomial Random Variable
[Figure: pmf of a binomial random variable, shown for x = 0, 1, . . . , 6.]

A discrete random variable that gives the number of 1’s in a sequence of n independent Bernoulli trials.
fx(x) = Σ_{k=0}^{n} (n choose k) p^k (1 − p)^{n−k} δ(x − k), where (n choose k) = n! / (k!(n − k)!).
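A quick check of this pmf using only the standard library (the values n = 6 and p = 0.5 are illustrative):

from math import comb

def binomial_pmf(k, n, p):
    # P(exactly k ones in n independent Bernoulli(p) trials)
    return comb(n, k) * p**k * (1 - p)**(n - k)

print([round(binomial_pmf(k, 6, 0.5), 4) for k in range(7)])
# [0.0156, 0.0938, 0.2344, 0.3125, 0.2344, 0.0938, 0.0156]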

Probability and Random Variables 18


Uniform Random Variable

[Figure: pdf (height 1/(b − a) on [a, b]) and cdf of a uniform random variable.]

A continuous random variable that takes values between a and b


with equal probabilities over intervals of equal length.
The phase of a received sinusoidal carrier is usually modeled as a
uniform random variable between 0 and 2π. Quantization error is
also typically modeled as uniform.

Probability and Random Variables 19


Gaussian (or Normal) Random Variable
[Figure: pdf (peak value 1/√(2πσ²) at x = µ) and cdf (value 1/2 at x = µ) of a Gaussian random variable.]

A continuous random variable whose pdf is:
fx(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)),
where µ and σ² are parameters. Usually denoted as N(µ, σ²).
Most important and frequently encountered random variable in
communications.
Probability and Random Variables 20
Functions of A Random Variable

The function y = g(x) is itself a random variable.


From the definition, the cdf of y can be written as

Fy (y) = P (ω ∈ Ω : g(x(ω)) ≤ y).

Assume that for all y, the equation g(x) = y has a countable number of solutions and at each solution point, dg(x)/dx exists and is nonzero. Then the pdf of y = g(x) is:
fy(y) = Σ_i fx(xi) / |dg(x)/dx|_{x=xi},
where {xi} are the solutions of g(x) = y.


A linear function of a Gaussian random variable is itself a Gaussian
random variable.
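A simulation sketch of this fact (assuming numpy; the linear map y = 2x + 1 and the sample size are illustrative choices):

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)   # x ~ N(0, 1)
y = 2 * x + 1                      # a linear function of a Gaussian random variable

# By the transformation rule, y ~ N(1, 2^2)
print(y.mean(), y.std())           # approximately 1 and 2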
Probability and Random Variables 21
Expectation of Random Variables I

Statistical averages, or moments, play an important role in the


characterization of the random variable.
The expected value (also called the mean value, first moment) of the random variable x is defined as
mx = E{x} ≡ ∫_{−∞}^{∞} x fx(x) dx,
where E denotes the statistical expectation operator.
In general, the nth moment of x is defined as
E{x^n} ≡ ∫_{−∞}^{∞} x^n fx(x) dx.

Probability and Random Variables 22


Expectation of Random Variables II

For n = 2, E{x2 } is known as the mean-squared value of the


random variable.
The nth central moment of the random variable x is:
E{(x − mx)^n} = ∫_{−∞}^{∞} (x − mx)^n fx(x) dx.
When n = 2 the central moment is called the variance, commonly denoted as σx²:
σx² = var(x) = E{(x − mx)²} = ∫_{−∞}^{∞} (x − mx)² fx(x) dx.

The variance provides a measure of the variable’s “randomness”.

Probability and Random Variables 23


Expectation of Random Variables III

The mean and variance of a random variable give a partial


description of its pdf.
Relationship between the variance, the first and second moments:

σx² = E{x²} − [E{x}]² = E{x²} − mx².

An electrical engineering interpretation: The AC power equals total


power minus DC power.
The square-root of the variance is known as the standard deviation,
and can be interpreted as the root-mean-squared (RMS) value of
the AC component.
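A numerical sketch of this power interpretation (assuming numpy; the DC level and noise level below are illustrative):

import numpy as np

rng = np.random.default_rng(2)
x = 1.5 + 0.5 * rng.standard_normal(100_000)   # DC value 1.5 plus zero-mean AC part

mean = x.mean()                  # DC value, m_x
msv = np.mean(x**2)              # total power, E{x^2}
var = msv - mean**2              # AC power, sigma_x^2
rms_ac = np.sqrt(var)            # standard deviation = RMS value of the AC component

print(mean, msv, var, rms_ac)    # approximately 1.5, 2.5, 0.25, 0.5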

Probability and Random Variables 24


The Gaussian Random Variable

[Figure (a): a muscle (EMG) signal, plotted as signal amplitude (volts) versus t (sec).]

Probability and Random Variables 25


[Figure (b): histogram of the EMG signal amplitudes with Gaussian and Laplacian pdf fits, plotted as fx(x) (1/volts) versus x (volts).]

fx(x) = (1/√(2πσx²)) exp(−(x − mx)²/(2σx²))   (Gaussian)
fx(x) = (a/2) e^{−a|x|}   (Laplacian)

Probability and Random Variables 26


Gaussian Distribution (Univariate)
[Figure: zero-mean Gaussian pdfs fx(x) with σx = 1, 2, 5.]

Range (±kσx)                      k = 1   k = 2   k = 3   k = 4
P(mx − kσx < x ≤ mx + kσx)        0.683   0.955   0.997   0.999

Error probability                 10⁻³    10⁻⁴    10⁻⁶    10⁻⁸
Distance from the mean            3.09    3.72    4.75    5.61
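The probabilities in the first table can be reproduced with the standard error function; a small sketch using only Python’s math module:

from math import erf, sqrt

def prob_within(k):
    # P(m_x - k*sigma_x < x <= m_x + k*sigma_x) for a Gaussian random variable
    return erf(k / sqrt(2))

print([round(prob_within(k), 4) for k in (1, 2, 3, 4)])
# [0.6827, 0.9545, 0.9973, 0.9999]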

Probability and Random Variables 27


Multiple Random Variables I

Often encountered when dealing with combined experiments or


repeated trials of a single experiment.
Multiple random variables are basically multidimensional functions
defined on a sample space of a combined experiment.
Let x and y be the two random variables defined on the same
sample space Ω. The joint cumulative distribution function is
defined as
Fx,y (x, y) = P (x ≤ x, y ≤ y).
Similarly, the joint probability density function is:
fx,y(x, y) = ∂²Fx,y(x, y) / (∂x ∂y).

Probability and Random Variables 28


Multiple Random Variables II

When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable, called the marginal pdf:
∫_{−∞}^{∞} fx,y(x, y) dx = fy(y),
∫_{−∞}^{∞} fx,y(x, y) dy = fx(x).
Note that:
∫_{−∞}^{∞} ∫_{−∞}^{∞} fx,y(x, y) dx dy = Fx,y(∞, ∞) = 1,
Fx,y(−∞, −∞) = Fx,y(−∞, y) = Fx,y(x, −∞) = 0.

Probability and Random Variables 29


Multiple Random Variables III

The conditional pdf of the random variable y, given that the value of the random variable x is equal to x, is defined as
fy(y|x) = fx,y(x, y) / fx(x) if fx(x) ≠ 0, and 0 otherwise.

Two random variables x and y are statistically independent if and


only if

fy (y|x) = fy (y) or equivalently fx,y (x, y) = fx (x)fy (y).

The joint moment is defined as
E{x^j y^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^j y^k fx,y(x, y) dx dy.

Probability and Random Variables 30


Multiple Random Variables IV

The joint central moment is
E{(x − mx)^j (y − my)^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − mx)^j (y − my)^k fx,y(x, y) dx dy,
where mx = E{x} and my = E{y}.

The most important moments are
E{xy} ≡ ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y fx,y(x, y) dx dy   (correlation),
cov{x, y} ≡ E{(x − mx)(y − my)} = E{xy} − mx my   (covariance).

Probability and Random Variables 31


Multiple Random Variables V

Let σx² and σy² be the variances of x and y. The covariance normalized w.r.t. σx σy is called the correlation coefficient:
ρx,y = cov{x, y} / (σx σy).
ρx,y indicates the degree of linear dependence between two random
variables.
It can be shown that |ρx,y | ≤ 1.
ρx,y = ±1 implies an increasing/decreasing linear relationship.
If ρx,y = 0, x and y are said to be uncorrelated.
It is easy to verify that if x and y are independent, then ρx,y = 0:
Independence implies lack of correlation.
However, lack of correlation (no linear relationship) does not in
general imply statistical independence.
Probability and Random Variables 32
Examples of Uncorrelated Dependent Random Variables

Example 1: Let x be a discrete random variable that takes on {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}, respectively. The random variables y = x³ and z = x² are uncorrelated but dependent.
Example 2: Let x be a random variable uniformly distributed over [−1, 1]. Then the random variables y = x and z = x² are uncorrelated but dependent.
Example 3: Let x be a Gaussian random variable with zero mean
and unit variance (standard normal distribution). The random
variables y = x and z = |x| are uncorrelated but dependent.
Example 4: Let u and v be two random variables (discrete or
continuous) with the same probability density function. Then
x = u − v and y = u + v are uncorrelated dependent random
variables.
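A small simulation sketch (assuming numpy) checking Example 2: y = x and z = x² have essentially zero correlation, yet z is completely determined by y, so they are dependent.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 1_000_000)
y, z = x, x**2

# E{yz} = E{x^3} = 0 by symmetry, so the correlation coefficient is ~0
print(np.corrcoef(y, z)[0, 1])     # approximately 0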
Probability and Random Variables 33
Example 1

x ∈ {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}
⇒ y = x³ ∈ {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}
⇒ z = x² ∈ {0, 1} with probabilities {1/2, 1/2}
my = (−1)(1/4) + (0)(1/2) + (1)(1/4) = 0;  mz = (0)(1/2) + (1)(1/2) = 1/2.
The joint pmf (similar to pdf) of y and z:
P(y = −1, z = 0) = 0,              P(y = −1, z = 1) = P(x = −1) = 1/4,
P(y = 0, z = 0) = P(x = 0) = 1/2,  P(y = 0, z = 1) = 0,
P(y = 1, z = 0) = 0,               P(y = 1, z = 1) = P(x = 1) = 1/4.
Therefore, E{yz} = (−1)(1)(1/4) + (0)(0)(1/2) + (1)(1)(1/4) = 0
⇒ cov{y, z} = E{yz} − my mz = 0 − (0)(1/2) = 0!
Probability and Random Variables 34
Jointly Gaussian Distribution (Bivariate)

fx,y(x, y) = 1 / (2π σx σy √(1 − ρx,y²)) × exp{ −1/(2(1 − ρx,y²)) [ (x − mx)²/σx² − 2ρx,y(x − mx)(y − my)/(σx σy) + (y − my)²/σy² ] },
where mx, my are the means and σx², σy² the variances.
ρx,y is indeed the correlation coefficient.
Marginal densities are Gaussian: fx(x) ∼ N(mx, σx²) and fy(y) ∼ N(my, σy²).
When ρx,y = 0 → fx,y(x, y) = fx(x)fy(y) → the random variables x and y are statistically independent.
Thus, for jointly Gaussian random variables, uncorrelatedness implies statistical independence; as noted earlier, this implication does not hold for random variables in general.
Weighted sum of two jointly Gaussian random variables is also
Gaussian.
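A sampling sketch of the bivariate Gaussian (assuming numpy; the parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(4)
mx, my, sx, sy, rho = 0.0, 0.0, 1.0, 1.0, 0.7     # illustrative parameters

cov = [[sx**2,         rho * sx * sy],
       [rho * sx * sy, sy**2        ]]
samples = rng.multivariate_normal([mx, my], cov, size=200_000)
x, y = samples[:, 0], samples[:, 1]

print(np.corrcoef(x, y)[0, 1])   # approximately rho = 0.7
print(x.std(), y.std())          # marginals are N(0, 1): approximately 1 and 1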
Probability and Random Variables 35
Joint pdf and Contours for σx = σy = 1 and ρx,y = 0

[Figure: surface plot of fx,y(x, y) and its contours for ρx,y = 0.]

Probability and Random Variables 36


Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.3

[Figure: surface plot of fx,y(x, y), a cross-section, and its contours for ρx,y = 0.3.]

Probability and Random Variables 37


Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.7

[Figure: surface plot of fx,y(x, y), a cross-section, and its contours for ρx,y = 0.7.]

Probability and Random Variables 38


Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.95

[Figure: surface plot of fx,y(x, y) and its contours for ρx,y = 0.95.]

Probability and Random Variables 39


Multivariate Gaussian pdf

Define the random vector x = [x1, x2, . . . , xn], the vector of means m = [m1, m2, . . . , mn], and the n × n covariance matrix C with Ci,j = cov(xi, xj) = E{(xi − mi)(xj − mj)}.
The random variables x1, x2, . . . , xn are jointly Gaussian if:
fx1,x2,...,xn(x1, x2, . . . , xn) = 1/√((2π)^n det(C)) × exp( −(1/2)(x − m) C⁻¹ (x − m)^T ).
If C is diagonal (i.e., the random variables x1, . . . , xn are all uncorrelated), the joint pdf is a product of the marginal pdfs: uncorrelatedness implies statistical independence for multiple jointly Gaussian random variables.
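A direct evaluation sketch of this joint pdf (assuming numpy; the 2 × 2 covariance matrix below is an illustrative choice):

import numpy as np

def gaussian_pdf(x, m, C):
    # Joint pdf of an n-dimensional jointly Gaussian vector with mean m and covariance C
    n = len(m)
    d = np.asarray(x, dtype=float) - np.asarray(m, dtype=float)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(C))
    return float(np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / norm)

C = np.array([[1.0, 0.3],
              [0.3, 1.0]])
print(gaussian_pdf([0.0, 0.0], [0.0, 0.0], C))   # peak value, approximately 0.1668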
Probability and Random Variables 40
Outline

1 Introduction

2 Probability and Random Variables

3 Random Processes

Random Processes 41
Random Processes I

[Figure: an ensemble of sample functions x1(t, ω1), x2(t, ω2), . . . , xM(t, ωM); sampling all of them at a time instant tk yields the random variable x(tk, ω), a real number for each outcome.]
Random Processes 42
Random Processes II

A mapping from a sample space to a set of time functions.


Ensemble: The set of possible time functions that one sees.
Denote this set by x(t), where the time functions x1 (t, ω1 ),
x2 (t, ω2 ), x3 (t, ω3 ), . . . are specific members of the ensemble.
At any time instant, t = tk , we have random variable x(tk ).
At any two time instants, say t1 and t2 , we have two different
random variables x(t1 ) and x(t2 ).
Any relationship between them is described by the joint pdf
fx(t1 ),x(t2 ) (x1 , x2 ; t1 , t2 ).
A complete description of the random process is determined by the
joint pdf fx(t1 ),x(t2 ),...,x(tN ) (x1 , x2 , . . . , xN ; t1 , t2 , . . . , tN ).
The most important joint pdfs are the first-order pdf fx(t) (x; t) and
the second-order pdf fx(t1 )x(t2 ) (x1 , x2 ; t1 , t2 ).
Random Processes 43
Examples of Random Processes I

[Figure: sample functions of (a) thermal noise and (b) a uniform phase process.]

Random Processes 44
Examples of Random Processes II

[Figure: sample functions of (c) a Rayleigh fading process and (d) binary random data taking values ±V with bit duration Tb.]

Random Processes 45
Classification of Random Processes
Based on whether its statistics change with time: the process is
non-stationary or stationary.
Different levels of stationarity:
  - Strictly stationary: the joint pdf of any order is independent of a shift in time.
  - Nth-order stationarity: the joint pdf does not depend on the time shift, but depends on the time spacings:
    fx(t1),x(t2),...,x(tN)(x1, x2, . . . , xN; t1, t2, . . . , tN) = fx(t1+t),x(t2+t),...,x(tN+t)(x1, x2, . . . , xN; t1 + t, t2 + t, . . . , tN + t).

The first- and second-order stationarity:


fx(t1 ) (x, t1 ) = fx(t1 +t) (x; t1 + t) = fx(t) (x)

fx(t1 ),x(t2 ) (x1 , x2 ; t1 , t2 ) = fx(t1 +t),x(t2 +t) (x1 , x2 ; t1 + t, t2 + t)


= fx(t1 ),x(t2 ) (x1 , x2 ; τ ), τ = t2 − t1 .
Random Processes 46
Statistical Averages or Joint Moments

Consider N random variables x(t1), x(t2), . . . , x(tN). The joint moments of these random variables are
E{x^{k1}(t1) x^{k2}(t2) · · · x^{kN}(tN)} = ∫_{x1=−∞}^{∞} · · · ∫_{xN=−∞}^{∞} x1^{k1} x2^{k2} · · · xN^{kN} fx(t1),x(t2),...,x(tN)(x1, x2, . . . , xN; t1, t2, . . . , tN) dx1 dx2 . . . dxN,
for all integers kj ≥ 1 and N ≥ 1.


We shall only consider the first- and second-order moments, i.e., E{x(t)}, E{x²(t)} and E{x(t1)x(t2)}. They are the mean value, mean-squared value and (auto)correlation, respectively.

Random Processes 47
Mean Value or the First Moment

The mean value of the process at time t is
mx(t) = E{x(t)} = ∫_{−∞}^{∞} x fx(t)(x; t) dx.
The average is across the ensemble and if the pdf varies with time then the mean value is a (deterministic) function of time.
If the process is stationary then the mean is independent of t, i.e., a constant:
mx = E{x(t)} = ∫_{−∞}^{∞} x fx(x) dx.

Random Processes 48
Mean-Squared Value or the Second Moment

This is defined as
MSVx(t) = E{x²(t)} = ∫_{−∞}^{∞} x² fx(t)(x; t) dx   (non-stationary),
MSVx = E{x²(t)} = ∫_{−∞}^{∞} x² fx(x) dx   (stationary).

The second central moment (or the variance) is:
σx²(t) = E{[x(t) − mx(t)]²} = MSVx(t) − mx²(t)   (non-stationary),
σx² = E{[x(t) − mx]²} = MSVx − mx²   (stationary).
Random Processes 49
Correlation

The autocorrelation function completely describes the power spectral density of the random process.
It is defined as the correlation between the two random variables x1 = x(t1) and x2 = x(t2):
Rx(t1, t2) = E{x(t1)x(t2)} = ∫_{x1=−∞}^{∞} ∫_{x2=−∞}^{∞} x1 x2 fx1,x2(x1, x2; t1, t2) dx1 dx2.

For a stationary process:
Rx(τ) = E{x(t)x(t + τ)} = ∫_{x1=−∞}^{∞} ∫_{x2=−∞}^{∞} x1 x2 fx1,x2(x1, x2; τ) dx1 dx2.

Wide-sense stationary (WSS) process: E{x(t)} = mx for any t, and Rx(t1, t2) = Rx(τ) for τ = t2 − t1.
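A small numerical sketch (assuming numpy): estimate Rx(τ) for the random phase sinusoid x(t) = cos(2πf0 t + Θ), Θ uniform on [0, 2π], by averaging across an ensemble of realizations; the known result for this process is Rx(τ) = (1/2)cos(2πf0 τ). The frequency and time instants below are arbitrary.

import numpy as np

rng = np.random.default_rng(5)
f0, t, tau = 10.0, 0.013, 0.037                  # illustrative values
theta = rng.uniform(0, 2 * np.pi, 500_000)       # one random phase per realization

x_t = np.cos(2 * np.pi * f0 * t + theta)
x_t_tau = np.cos(2 * np.pi * f0 * (t + tau) + theta)

print(np.mean(x_t * x_t_tau))                    # ensemble-average estimate of Rx(tau)
print(0.5 * np.cos(2 * np.pi * f0 * tau))        # theoretical value; the two agree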
Random Processes 50
Properties of the Autocorrelation Function

1. Rx (τ ) = Rx (−τ ). It is an even function of τ because the same set


of product values is averaged across the ensemble, regardless of the
direction of translation.
2. |Rx(τ)| ≤ Rx(0). The maximum always occurs at τ = 0, though there may be other values of τ for which it is as big. Further, Rx(0) is the mean-squared value of the random process.
3. If for some τ0 we have Rx (τ0 ) = Rx (0), then for all integers k,
Rx (kτ0 ) = Rx (0).
4. If mx 6= 0 then Rx (τ ) will have a constant component equal to m2x .
5. Autocorrelation functions cannot have an arbitrary shape. The
restriction on the shape arises from the fact that the Fourier
transform of an autocorrelation function must be greater than or
equal to zero, i.e., F{Rx (τ )} ≥ 0.

Random Processes 51
Power Spectral Density of a Random Process I

Taking the Fourier transform of the random process does not work.
[Figure: time-domain ensemble x1(t, ω1), x2(t, ω2), . . . , xM(t, ωM) and the corresponding frequency-domain ensemble |X1(f, ω1)|, |X2(f, ω2)|, . . . , |XM(f, ωM)|.]
Random Processes 52
Power Spectral Density of a Random Process II

Need to determine how the average power of the process is


distributed in frequency.
Define a truncated process:
xT(t) = x(t) for −T ≤ t ≤ T, and xT(t) = 0 otherwise.

Consider the Fourier transform of this truncated process:
XT(f) = ∫_{−∞}^{∞} xT(t) e^{−j2πft} dt.

Average the energy over the total time, 2T:
P = (1/2T) ∫_{−T}^{T} xT²(t) dt = (1/2T) ∫_{−∞}^{∞} |XT(f)|² df   (watts).
Random Processes 53
Power Spectral Density of a Random Process III

Find the average value of P:
E{P} = E{(1/2T) ∫_{−T}^{T} xT²(t) dt} = E{(1/2T) ∫_{−∞}^{∞} |XT(f)|² df}.

Take the limit as T → ∞:
lim_{T→∞} (1/2T) ∫_{−T}^{T} E{xT²(t)} dt = lim_{T→∞} (1/2T) ∫_{−∞}^{∞} E{|XT(f)|²} df.

It follows that
MSVx = lim_{T→∞} (1/2T) ∫_{−T}^{T} E{xT²(t)} dt = ∫_{−∞}^{∞} lim_{T→∞} (E{|XT(f)|²} / 2T) df   (watts).
Random Processes 54
Power Spectral Density of a Random Process IV

Finally,
Sx(f) = lim_{T→∞} E{|XT(f)|²} / (2T)   (watts/Hz)
is the power spectral density of the process.
It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:
Rx(τ) ←→ Sx(f) = ∫_{τ=−∞}^{∞} Rx(τ) e^{−j2πfτ} dτ.
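A discrete-time sketch of this definition (assuming numpy): average the squared magnitude spectrum over an ensemble of realizations of a zero-mean white sequence and check that the estimate is flat at the per-sample variance. The variance, record length and ensemble size are illustrative.

import numpy as np

rng = np.random.default_rng(6)
M, N, sigma2 = 2000, 256, 2.0                   # realizations, samples each, variance

x = np.sqrt(sigma2) * rng.standard_normal((M, N))
X = np.fft.fft(x, axis=1)

# Ensemble-averaged periodogram, the discrete-time analog of E{|XT(f)|^2} / 2T
S_est = np.mean(np.abs(X) ** 2, axis=0) / N

print(S_est.mean(), S_est.std())                # approximately sigma2 = 2.0, small spread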

Random Processes 55
Time Averaging and Ergodicity

An ergodic process is one where any member of the ensemble exhibits the same statistical behavior as the whole ensemble.
All time averages on a single ensemble member are equal to the corresponding ensemble average:
E{x^n(t)} = ∫_{−∞}^{∞} x^n fx(x) dx = lim_{T→∞} (1/2T) ∫_{−T}^{T} [xk(t, ωk)]^n dt,  for all n, k.

For an ergodic process: To measure various statistical averages, it is


sufficient to look at only one realization of the process and find the
corresponding time average.
For a process to be ergodic it must be stationary. The converse is
not true.
Random Processes 56
Examples of Random Processes

(Example 3.4) x(t) = A cos(2πf0 t + Θ), where Θ is a random


variable uniformly distributed on [0, 2π]. This process is both
stationary and ergodic.
(Example 3.5) x(t) = x, where x is a random variable uniformly
distributed on [−A, A], where A > 0. This process is WSS, but not
ergodic.
(Example 3.6) x(t) = A cos(2πf0 t + Θ), where A is a zero-mean random variable with variance σA², and Θ is uniform on [0, 2π]. Furthermore, A and Θ are statistically independent. This process is not ergodic, but strictly stationary.

Random Processes 57
Random Processes and LTI Systems

[Figure: a linear, time-invariant (LTI) system with impulse response h(t) ↔ H(f); the input x(t) has mean mx and autocorrelation Rx(τ) ↔ Sx(f), the output y(t) has mean my and autocorrelation Ry(τ) ↔ Sy(f), and Rx,y(τ) is their cross-correlation.]

my = E{y(t)} = E{ ∫_{−∞}^{∞} h(λ) x(t − λ) dλ } = mx H(0)
Sy(f) = |H(f)|² Sx(f)
Ry(τ) = h(τ) ∗ h(−τ) ∗ Rx(τ).
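A discrete-time sketch of the relation Sy(f) = |H(f)|² Sx(f) (assuming numpy; the three-tap FIR filter and the ensemble size are illustrative): pass white noise with Sx(f) = 1 through h and compare the ensemble-averaged output spectrum with |H(f)|².

import numpy as np

rng = np.random.default_rng(7)
h = np.array([0.5, 1.0, 0.5])            # illustrative FIR impulse response
M, N = 2000, 256                         # realizations, samples per realization

x = rng.standard_normal((M, N))          # white input, Sx(f) = 1
y = np.array([np.convolve(r, h, mode="same") for r in x])

Sy_est = np.mean(np.abs(np.fft.fft(y, axis=1)) ** 2, axis=0) / N
H = np.fft.fft(h, N)

bins = [0, 32, 64, 96]
print(np.round(Sy_est[bins], 2))          # roughly [4.0, 2.91, 1.0, 0.09]
print(np.round(np.abs(H[bins]) ** 2, 2))  # [4.0, 2.91, 1.0, 0.09]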

Random Processes 58
Thermal Noise in Communication Systems

A natural noise source is thermal noise, whose amplitude statistics


are well modeled to be Gaussian with zero mean.
The autocorrelation and PSD are well modeled as:
Rw(τ) = kθG e^{−|τ|/t0} / t0   (watts),
Sw(f) = 2kθG / (1 + (2πf t0)²)   (watts/Hz),
where k = 1.38 × 10⁻²³ joule/°K is Boltzmann’s constant, G is the conductance of the resistor (mhos), θ is the temperature in degrees Kelvin, and t0 is the statistical average of the time intervals between collisions of free electrons in the resistor (on the order of 10⁻¹² sec).

Random Processes 59
[Figure: (a) the power spectral density Sw(f) of thermal noise compared with the flat white-noise level N0/2, for f from −15 to 15 GHz; (b) the autocorrelation Rw(τ) of thermal noise compared with the white-noise impulse (N0/2)δ(τ), for τ in picoseconds.]

Random Processes 60
The noise PSD is approximately flat over the frequency range of 0 to 10 GHz ⇒ let the spectrum be flat from 0 to ∞:
Sw(f) = N0/2   (watts/Hz),
where N0 = 4kθG is a constant.
Noise that has a uniform spectrum over the entire frequency range is referred to as white noise.
The autocorrelation of white noise is
Rw(τ) = (N0/2) δ(τ)   (watts).
Since Rw (τ ) = 0 for τ 6= 0, any two different samples of white
noise, no matter how close in time they are taken, are uncorrelated.
Since the noise samples of white noise are uncorrelated, if the noise
is both white and Gaussian (for example, thermal noise) then the
noise samples are also independent.
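A discrete-time sketch of these two facts (assuming numpy): independent Gaussian samples of variance N0/2 stand in for white Gaussian noise, and their empirical lag-1 correlation is essentially zero. The value of N0 is illustrative.

import numpy as np

rng = np.random.default_rng(8)
N0, n = 2.0, 500_000                            # illustrative PSD level and sample count

w = np.sqrt(N0 / 2) * rng.standard_normal(n)    # white Gaussian noise samples

print(np.var(w))                                # approximately N0/2 = 1.0
print(np.mean(w[:-1] * w[1:]))                  # lag-1 correlation, approximately 0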
Random Processes 61
Example

Suppose that a (WSS) white noise process, x(t), of zero mean and power spectral density N0/2 is applied to the input of the filter.
(a) Find and sketch the power spectral density and autocorrelation function of the random process y(t) at the output of the filter.
(b) What are the mean and variance of the output process y(t)?

[Figure: the input x(t) drives a series inductor L; the output y(t) is taken across the resistor R.]

Random Processes 62
H(f) = R / (R + j2πfL) = 1 / (1 + j2πfL/R).
Sy(f) = (N0/2) · 1 / (1 + (2πfL/R)²)  ←→  Ry(τ) = (N0R/(4L)) e^{−(R/L)|τ|}.

[Figure: Sy(f) (watts/Hz), with peak value N0/2 at f = 0, versus f (Hz); Ry(τ) (watts), with peak value N0R/(4L) at τ = 0, versus τ (sec).]

For part (b): my = mx H(0) = 0 and σy² = Ry(0) − my² = N0R/(4L).

Random Processes 63
