
Chapter 5

Random Variables and Processes

Wireless Information Transmission System Lab.


Institute of Communications Engineering
National Sun Yat-sen University
Table of Contents

◊ 5.1 Introduction
◊ 5.2 Probability
◊ 5.3 Random Variables
◊ 5.4 Statistical Averages
◊ 5.5 Random Processes
◊ 5.6 Mean, Correlation and Covariance Functions
◊ 5.7 Transmission of a Random Process through a Linear Filter
◊ 5.8 Power Spectral Density
◊ 5.9 Gaussian Process
◊ 5.10 Noise
◊ 5.11 Narrowband Noise

2
5.1 Introduction

◊ The Fourier transform is a mathematical tool for the representation of deterministic signals.
◊ Deterministic signals: the class of signals that may be modeled as completely specified functions of time.
◊ A signal is "random" if it is not possible to predict its precise value in advance.
◊ A random process consists of an ensemble (family) of sample functions, each of which varies randomly with time.
◊ A random variable is obtained by observing a random process at a fixed instant of time.

3
5.2 Probability
◊ Probability theory is rooted in phenomena that, explicitly or
implicitly, can be modeled by an experiment with an outcome that is
subject to chance.
◊ Example: Experiment may be the observation of the result of
tossing a fair coin. In this experiment, the possible outcomes of a
trial are “heads” or “tails”.
◊ If an experiment has K possible outcomes, then for the kth possible outcome we have a point called the sample point, which we denote by $s_k$. With this basic framework, we make the following definitions:
◊ The set of all possible outcomes of the experiment is called the sample space, which we denote by S.
◊ An event corresponds to either a single sample point or a set of sample points in the space S.

4
5.2 Probability
◊ A single sample point is called an elementary event.
◊ The entire sample space S is called the sure event, and the null set ∅ is called the null or impossible event.
◊ Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the other event.
◊ A probability measure P is a function that assigns a non-negative number to an event A in the sample space S and satisfies the following three properties (axioms):
1. $0 \le P[A] \le 1$  (5.1)
2. $P[S] = 1$  (5.2)
3. If A and B are two mutually exclusive events, then $P[A \cup B] = P[A] + P[B]$  (5.3)

5
5.2 Probability

6
5.2 Probability
◊ The following properties of probability measure P may be derived
from the above axioms:

1. $P[\bar{A}] = 1 - P[A]$  (5.4)
2. When events A and B are not mutually exclusive:
$P[A \cup B] = P[A] + P[B] - P[A \cap B]$  (5.5)
3. If $A_1, A_2, \ldots, A_m$ are mutually exclusive events that include all possible outcomes of the random experiment, then
$P[A_1] + P[A_2] + \cdots + P[A_m] = 1$  (5.6)

7
5.2 Probability

◊ Let P[B|A] denote the probability of event B, given that event A has
occurred. The probability P[B|A] is called the conditional
probability of B given A.
◊ P[B|A] is defined by
$P[B \mid A] = \dfrac{P[A \cap B]}{P[A]}$  (5.7)
◊ Bayes’ rule
◊ We may write Eq.(5.7) as P[A∩B] = P[B|A]P[A] (5.8)
◊ It is apparent that we may also write P[A∩B] = P[A|B]P[B] (5.9)
◊ From Eqs. (5.8) and (5.9), provided P[A] ≠ 0, we may determine P[B|A] by using the relation
$P[B \mid A] = \dfrac{P[A \mid B]\,P[B]}{P[A]}$  (5.10)

8
5.2 Conditional Probability
◊ Suppose that the conditional probability P[B|A] is simply equal to the elementary probability of occurrence of event B, that is
$P[B \mid A] = P[B] \;\Rightarrow\; P[A \cap B] = P[A]\,P[B]$
so that
$P[A \mid B] = \dfrac{P[A \cap B]}{P[B]} = \dfrac{P[A]\,P[B]}{P[B]} = P[A]$  (5.13)
◊ Events A and B that satisfy this condition are said to be
statistically independent.

9
5.2 Conditional Probability
◊ Example 5.1 Binary Symmetric Channel
◊ This channel is said to be discrete in that it is designed to handle
discrete messages.
◊ The channel is memoryless in the sense that the channel output at
any time depends only on the channel input at that time.
◊ The channel is symmetric, which means that the probability of
receiving symbol 1 when 0 is sent is the same as the probability
of receiving symbol 0 when symbol 1 is sent.

10
5.2 Conditional Probability

◊ Example 5.1 Binary Symmetric Channel (continued)


◊ The a priori probabilities of sending binary symbols 0 and 1:
$P[A_0] = p_0, \qquad P[A_1] = p_1$
◊ The conditional probabilities of error:
$P[B_1 \mid A_0] = P[B_0 \mid A_1] = p$
◊ The probability of receiving symbol 0 is given by:
$P[B_0] = P[B_0 \mid A_0]P[A_0] + P[B_0 \mid A_1]P[A_1] = (1-p)\,p_0 + p\,p_1$
◊ The probability of receiving symbol 1 is given by:
$P[B_1] = P[B_1 \mid A_0]P[A_0] + P[B_1 \mid A_1]P[A_1] = p\,p_0 + (1-p)\,p_1$
11
5.2 Conditional Probability

◊ Example 5.1 Binary Symmetric Channel (continued)


◊ The a posteriori probabilities P[A0|B0] and P[A1|B1]:
$P[A_0 \mid B_0] = \dfrac{P[B_0 \mid A_0]\,P[A_0]}{P[B_0]} = \dfrac{(1-p)\,p_0}{(1-p)\,p_0 + p\,p_1}$
$P[A_1 \mid B_1] = \dfrac{P[B_1 \mid A_1]\,P[A_1]}{P[B_1]} = \dfrac{(1-p)\,p_1}{p\,p_0 + (1-p)\,p_1}$

12
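◊ A minimal Monte Carlo sketch (Python/NumPy) of the binary symmetric channel in Example 5.1, checking the a posteriori probability P[A0|B0] against the Bayes'-rule expression above; the values chosen for p0, p1 and p are illustrative only, not taken from the slides.

```python
import numpy as np

# Simulate the binary symmetric channel of Example 5.1 and compare the
# empirical a posteriori probability P[A0|B0] with the Bayes'-rule formula.
# p0, p1 (priors) and p (crossover probability) are illustrative values.
rng = np.random.default_rng(0)
p0, p1, p = 0.6, 0.4, 0.1
n = 1_000_000

sent = rng.choice([0, 1], size=n, p=[p0, p1])     # channel input symbols (events A0, A1)
flip = rng.random(n) < p                          # symbol-error events
received = np.where(flip, 1 - sent, sent)         # channel output symbols (events B0, B1)

empirical = np.mean(sent[received == 0] == 0)     # fraction of received 0s that were sent as 0
theory = (1 - p) * p0 / ((1 - p) * p0 + p * p1)   # (1-p)p0 / ((1-p)p0 + p*p1)
print(empirical, theory)                          # the two values should agree closely
```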
5.3 Random Variables
◊ We denote the random variable as X(s) or just X.
◊ X is a function, s is the outcome of the experiment.
◊ Random variable may be discrete or continuous.
◊ Consider the random variable X and the probability of the event
X ≤ x. We denote this probability by P[X ≤ x].
◊ To simplify our notation, we write
$F_X(x) = P[X \le x]$  (5.15)
◊ The function FX(x) is called the cumulative distribution
function (cdf) or simply the distribution function of the random
variable X.
◊ The distribution function FX(x) has the following properties:
1. $0 \le F_X(x) \le 1$
2. $F_X(x_1) \le F_X(x_2)$ if $x_1 \le x_2$
13
5.3 Random Variables

There may be more than one random variable associated with the same random experiment.

14
5.3 Random Variables
◊ If the distribution function is continuously differentiable, then
$f_X(x) = \dfrac{d}{dx} F_X(x)$  (5.17)
◊ fX(x) is called the probability density function (pdf) of the random variable X.
◊ Probability of the event x1 < X ≤ x2 equals
$P[x_1 < X \le x_2] = P[X \le x_2] - P[X \le x_1] = F_X(x_2) - F_X(x_1) = \int_{x_1}^{x_2} f_X(x)\,dx$  (5.19)
where $F_X(x) = \int_{-\infty}^{x} f_X(\xi)\,d\xi$.

◊ The probability density function must always be nonnegative, with total area equal to one.
15
5.3 Random Variables
◊ Example 5.2 Uniform Distribution
$f_X(x) = \begin{cases} 0, & x < a \\ \dfrac{1}{b-a}, & a \le x \le b \\ 0, & x > b \end{cases}$
$F_X(x) = \begin{cases} 0, & x < a \\ \dfrac{x-a}{b-a}, & a \le x \le b \\ 1, & x > b \end{cases}$
16
5.3 Random Variables

◊ Several Random Variables


◊ Consider two random variables X and Y. We define the joint
distribution function FX,Y(x,y) as the probability that the random
variable X is less than or equal to a specified value x and that the
random variable Y is less than or equal to a specified value y.
$F_{X,Y}(x,y) = P[X \le x, Y \le y]$  (5.23)
◊ Suppose that the joint distribution function FX,Y(x,y) is continuous everywhere, and that the partial derivative
$f_{X,Y}(x,y) = \dfrac{\partial^2 F_{X,Y}(x,y)}{\partial x\,\partial y}$  (5.24)
exists and is continuous everywhere. We call the function fX,Y(x,y)
the joint probability density function of the random variables X
and Y.
17
5.3 Random Variables

◊ Several Random Variables


◊ The joint distribution function FX,Y(x,y) is a monotone-nondecreasing function of both x and y.
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(\eta, \psi)\,d\eta\,d\psi = 1$
◊ Marginal density fX(x):
$F_X(x) = \int_{-\infty}^{x}\int_{-\infty}^{\infty} f_{X,Y}(\eta, \psi)\,d\psi\,d\eta \;\Rightarrow\; f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, \psi)\,d\psi$  (5.27)
◊ Suppose that X and Y are two continuous random variables with joint probability density function fX,Y(x,y). The conditional probability density function of Y given that X = x is defined by
$f_Y(y \mid x) = \dfrac{f_{X,Y}(x,y)}{f_X(x)}$  (5.28)
18
5.3 Random Variables

◊ Several Random Variables


◊ If the random variables X and Y are statistically independent, then knowledge of the outcome of X can in no way affect the distribution of Y:
$f_Y(y \mid x) = f_Y(y)$, which by (5.28) gives
$f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$  (5.32)
$P[X \in A,\, Y \in B] = P[X \in A]\,P[Y \in B]$  (5.33)

19
5.3 Random Variables
◊ Example 5.3 Binomial Random Variable
◊ Consider a sequence of coin-tossing experiments where the
probability of a head is p and let Xn be the Bernoulli random
variable representing the outcome of the nth toss.
◊ Let Y be the number of heads that occur in N tosses of the coin:
$Y = \sum_{n=1}^{N} X_n$
$P[Y = y] = \binom{N}{y} p^y (1-p)^{N-y}, \qquad \binom{N}{y} = \dfrac{N!}{y!\,(N-y)!}$
20
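◊ A small numerical sketch (Python/NumPy) of Example 5.3: the sum of N Bernoulli(p) outcomes follows the binomial law above; N, p and the inspected value y are illustrative choices.

```python
import numpy as np
from math import comb

# Compare the empirical distribution of Y = sum of N Bernoulli(p) variables
# with the binomial PMF P[Y = y] = C(N, y) p^y (1-p)^(N-y).
rng = np.random.default_rng(1)
N, p, trials = 10, 0.3, 200_000

tosses = rng.random((trials, N)) < p          # Bernoulli outcomes X_n for each trial
Y = tosses.sum(axis=1)                        # number of heads per trial

y = 4
empirical = np.mean(Y == y)
theory = comb(N, y) * p**y * (1 - p)**(N - y)
print(empirical, theory)                      # should agree to about three decimals
```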
5.4 Statistical Averages

◊ The expected value or mean of a random variable X is defined by
$\mu_X = E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx$  (5.36)
◊ Function of a Random Variable
◊ Let X denote a random variable, and let g(X) denote a real-valued function defined on the real line. We denote this as
$Y = g(X)$  (5.37)
◊ To find the expected value of the random variable Y:
$E[Y] = \int_{-\infty}^{\infty} y\, f_Y(y)\,dy = E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx$  (5.38)
21
5.4 Statistical Averages

◊ Example 5.4 Cosinusoidal Random Variable


◊ Let Y=g(X)=cos(X)
◊ X is a random variable uniformly distributed in the interval (-π, π):
$f_X(x) = \begin{cases} \dfrac{1}{2\pi}, & -\pi < x \le \pi \\ 0, & \text{otherwise} \end{cases}$
$E[Y] = \int_{-\pi}^{\pi} \cos(x)\,\dfrac{1}{2\pi}\,dx = \dfrac{1}{2\pi}\big[\sin x\big]_{-\pi}^{\pi} = 0$
22
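◊ A quick numerical check (Python/NumPy) of Example 5.4, estimating E[cos(X)] for X uniform on (-π, π):

```python
import numpy as np

# Monte Carlo estimate of E[Y] = E[cos(X)] with X ~ Uniform(-pi, pi);
# the analytical value is 0.
rng = np.random.default_rng(2)
x = rng.uniform(-np.pi, np.pi, size=1_000_000)
print(np.cos(x).mean())    # close to 0, within Monte Carlo error of about 1e-3
```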
5.4 Statistical Averages

◊ Moments
◊ For the special case of $g(X) = X^n$, we obtain the nth moment of the probability distribution of the random variable X; that is
$E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\,dx$  (5.39)
◊ Mean-square value of X:
$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx$  (5.40)
◊ The nth central moment is
$E[(X - \mu_X)^n] = \int_{-\infty}^{\infty} (x - \mu_X)^n f_X(x)\,dx$  (5.41)

23
5.4 Statistical Averages
◊ For n = 2 the second central moment is referred to as the variance of the random variable X, written as
$\mathrm{Var}[X] = E[(X - \mu_X)^2] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\,dx$  (5.42)
◊ The variance of a random variable X is commonly denoted as $\sigma_X^2$.
◊ The square root of the variance is called the standard deviation of the random variable X.
◊ $\sigma_X^2 = \mathrm{Var}[X] = E[(X - \mu_X)^2] = E[X^2 - 2\mu_X X + \mu_X^2] = E[X^2] - 2\mu_X E[X] + \mu_X^2 = E[X^2] - \mu_X^2$  (5.44)
24
5.4 Statistical Averages
◊ Characteristic function $\psi_X(jv)$ is defined as the expectation of the complex exponential function $\exp(jvX)$, as shown by
$\psi_X(jv) = E[\exp(jvX)] = \int_{-\infty}^{\infty} f_X(x) \exp(jvx)\,dx$  (5.45)
◊ In other words, the characteristic function $\psi_X(jv)$ is the Fourier transform of the probability density function fX(x).
◊ Analogous with the inverse Fourier transform:
$f_X(x) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} \psi_X(jv) \exp(-jvx)\,dv$  (5.46)

25
5.4 Statistical Averages

◊ Characteristic functions
◊ First moment (mean) can be obtained by:
$E[X] = m_x = -j \left.\dfrac{d\psi(jv)}{dv}\right|_{v=0}$
◊ Since the differentiation process can be repeated, the nth moment can be calculated by:
$E[X^n] = (-j)^n \left.\dfrac{d^n \psi(jv)}{dv^n}\right|_{v=0}$

26
5.4 Statistical Averages

◊ Characteristic functions
◊ Determining the PDF of a sum of statistically independent random variables:
$Y = \sum_{i=1}^{n} X_i \;\Rightarrow\; \psi_Y(jv) = E\big[e^{jvY}\big] = E\!\left[\exp\!\Big(jv\sum_{i=1}^{n} X_i\Big)\right]$
$= E\!\left[\prod_{i=1}^{n} e^{jvX_i}\right] = \int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} \Big(\prod_{i=1}^{n} e^{jvx_i}\Big) f_{X_1,X_2,\ldots,X_n}(x_1, x_2, \ldots, x_n)\,dx_1\,dx_2\cdots dx_n$
Since the random variables are statistically independent,
$f_{X_1,X_2,\ldots,X_n}(x_1, x_2, \ldots, x_n) = f_{X_1}(x_1) f_{X_2}(x_2)\cdots f_{X_n}(x_n) \;\Rightarrow\; \psi_Y(jv) = \prod_{i=1}^{n} \psi_{X_i}(jv)$
If the $X_i$ are i.i.d. (independent and identically distributed), then
$\psi_Y(jv) = \big[\psi_X(jv)\big]^n$

27
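◊ A numerical sketch (Python/NumPy) of the identity $\psi_Y(jv) = [\psi_X(jv)]^n$ for a sum of n i.i.d. random variables, checked here for Uniform(0,1) samples at a single frequency v; the distribution, n and v are illustrative assumptions.

```python
import numpy as np

# Empirical characteristic function of Y = X1 + ... + Xn (i.i.d. Uniform(0,1))
# compared with psi_X(jv)**n, where psi_X(jv) = (e^{jv} - 1) / (jv).
rng = np.random.default_rng(3)
n, samples = 3, 500_000
X = rng.random((samples, n))
Y = X.sum(axis=1)

v = 2.0
psi_Y_empirical = np.mean(np.exp(1j * v * Y))
psi_X = (np.exp(1j * v) - 1) / (1j * v)
print(psi_Y_empirical, psi_X**n)     # the two complex values should nearly coincide
```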
5.4 Statistical Averages

◊ Characteristic functions
◊ The PDF of Y is determined from the inverse Fourier
transform of ΨY(jv).
◊ Since the characteristic function of the sum of n statistically independent random variables is equal to the product of the characteristic functions of the individual random variables, it follows that the PDF of Y is the n-fold convolution of the PDFs of the Xi.
◊ Usually, the n-fold convolution is more difficult to perform
than the characteristic function method in determining the PDF
of Y.

28
5.4 Statistical Averages

◊ Example 5.5 Gaussian Random Variable


◊ The probability density function of such a Gaussian random variable is defined by:
$f_X(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma_X}\exp\!\left[-\dfrac{(x-\mu_X)^2}{2\sigma_X^2}\right], \qquad -\infty < x < \infty$
◊ The characteristic function of a Gaussian random variable with mean $m_x$ and variance $\sigma^2$ is (Problem 5.1):
$\psi(jv) = \int_{-\infty}^{\infty} e^{jvx}\,\dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-m_x)^2/2\sigma^2}\,dx = e^{jvm_x - \frac{1}{2}v^2\sigma^2}$
◊ It can be shown that the central moments of a Gaussian random variable are given by:
$E[(X-m_x)^k] = \mu_k = \begin{cases} 1\cdot 3\cdot 5\cdots(k-1)\,\sigma^k, & k \text{ even} \\ 0, & k \text{ odd} \end{cases}$
29
5.4 Statistical Averages

◊ Example 5.5 Gaussian Random Variable (cont.)


◊ The sum of n statistically independent Gaussian random
variables is also a Gaussian random variable.
◊ Proof:
$Y = \sum_{i=1}^{n} X_i$
$\psi_Y(jv) = \prod_{i=1}^{n} \psi_{X_i}(jv) = \prod_{i=1}^{n} e^{jvm_i - v^2\sigma_i^2/2} = e^{jvm_y - v^2\sigma_y^2/2}$
where $m_y = \sum_{i=1}^{n} m_i$ and $\sigma_y^2 = \sum_{i=1}^{n} \sigma_i^2$.
Therefore, Y is Gaussian-distributed with mean $m_y$ and variance $\sigma_y^2$.
30
5.4 Statistical Averages

◊ Joint Moments
◊ Consider next a pair of random variables X and Y. A set of statistical averages of importance in this case are the joint moments, namely, the expected value of $X^i Y^k$, where i and k may assume any positive integer values. We may thus write
$E[X^i Y^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^i y^k f_{X,Y}(x,y)\,dx\,dy$  (5.51)
◊ A joint moment of particular importance is the correlation defined by E[XY], which corresponds to i = k = 1.
◊ Covariance of X and Y:
$\mathrm{Cov}[XY] = E[(X - E[X])(Y - E[Y])] = E[XY] - \mu_X \mu_Y$  (5.53)
31
5.4 Statistical Averages
◊ Correlation coefficient of X and Y:
$\rho = \dfrac{\mathrm{Cov}[XY]}{\sigma_X \sigma_Y}$  (5.54)
◊ σX and σY denote the standard deviations of X and Y.
◊ We say X and Y are uncorrelated if and only if Cov[XY] = 0.
◊ Note that if X and Y are statistically independent, then they are uncorrelated.
◊ The converse of the above statement is not necessarily true.
◊ We say X and Y are orthogonal if and only if E[XY] = 0.
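◊ A short numerical illustration (Python/NumPy) of covariance and the correlation coefficient in Eqs. (5.53)-(5.54); the joint distribution used below is an arbitrary example, not one from the slides.

```python
import numpy as np

# Estimate Cov[XY] and rho = Cov[XY] / (sigma_X * sigma_Y) from samples of
# two correlated random variables.
rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(scale=0.8, size=100_000)   # y is correlated with x

cov = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov / (x.std() * y.std())
print(cov, rho)    # rho lies in [-1, 1]; independent x, y would give values near 0
```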

32
5.4 Statistical Averages

◊ Example 5.6 Moments of a Bernoulli Random Variable


◊ Consider the coin-tossing experiment where the probability of a head is p. Let X be a random variable that takes the value 0 if the result is a tail and 1 if it is a head. We say that X is a Bernoulli random variable.
$P[X = x] = \begin{cases} 1-p, & x = 0 \\ p, & x = 1 \\ 0, & \text{otherwise} \end{cases}$
$E[X] = \sum_{k=0}^{1} k\,P[X = k] = 0\cdot(1-p) + 1\cdot p = p$
$\sigma_X^2 = \sum_{k=0}^{1} (k - \mu_X)^2 P[X = k] = (0-p)^2(1-p) + (1-p)^2 p = p(1-p)$
$E[X_j X_k] = \begin{cases} E[X_j]E[X_k] = p^2, & j \ne k \\ E[X_j^2] = p, & j = k \end{cases}$
where $E[X_j^2] = \sum_{k=0}^{1} k^2 P[X = k] = p$.
33
5.5 Random Processes

An ensemble of sample functions.
For a fixed time instant $t_k$, $\{x_1(t_k), x_2(t_k), \ldots, x_n(t_k)\} = \{X(t_k, s_1), X(t_k, s_2), \ldots, X(t_k, s_n)\}$ constitutes a random variable.

34
5.5 Random Processes

◊ At any given time instant, the value of a stochastic process is a random variable indexed by the parameter t. We denote such a process by X(t).
◊ In general, the parameter t is continuous, whereas X may be either continuous or discrete, depending on the characteristics of the source that generates the stochastic process.
◊ The noise voltage generated by a single resistor or a single information source represents a single realization of the stochastic process. It is called a sample function.

35
5.5 Random Processes

◊ The set of all possible sample functions constitutes an ensemble of sample functions or, equivalently, the stochastic process X(t).
◊ In general, the number of sample functions in the ensemble is assumed to be extremely large; often it is infinite.
◊ Having defined a stochastic process X(t) as an ensemble of sample functions, we may consider the values of the process at any set of time instants t1 > t2 > t3 > … > tn, where n is any positive integer.
◊ In general, the random variables $X_{t_i} \equiv X(t_i)$, $i = 1, 2, \ldots, n$, are characterized statistically by their joint PDF $f_X(x_{t_1}, x_{t_2}, \ldots, x_{t_n})$.
36
5.5 Random Processes

◊ Stationary stochastic processes


◊ Consider another set of n random variables $X_{t_i+\Delta t} \equiv X(t_i + \Delta t)$, $i = 1, 2, \ldots, n$, where $\Delta t$ is an arbitrary time shift. These random variables are characterized by the joint PDF $f_X(x_{t_1+\Delta t}, x_{t_2+\Delta t}, \ldots, x_{t_n+\Delta t})$.
◊ The joint PDFs of the random variables $X_{t_i}$ and $X_{t_i+\Delta t}$, $i = 1, 2, \ldots, n$, may or may not be identical. When they are identical, i.e., when
$f_X(x_{t_1}, x_{t_2}, \ldots, x_{t_n}) = f_X(x_{t_1+\Delta t}, x_{t_2+\Delta t}, \ldots, x_{t_n+\Delta t})$
for all $\Delta t$ and all n, the process is said to be stationary in the strict sense (SSS).
◊ When the joint PDFs are different, the stochastic process is non-stationary.

37
5.5 Random Processes

◊ Averages for a stochastic process are called ensemble averages.


◊ The nth moment of the random variable $X_{t_i}$ is defined as:
$E[X_{t_i}^n] = \int_{-\infty}^{\infty} x_{t_i}^n\, f_X(x_{t_i})\,dx_{t_i}$
◊ In general, the value of the nth moment will depend on the time instant $t_i$ if the PDF of $X_{t_i}$ depends on $t_i$.
◊ When the process is stationary, $f_X(x_{t_i+\Delta t}) = f_X(x_{t_i})$ for all $\Delta t$. Therefore, the PDF is independent of time, and, as a consequence, the nth moment is independent of time.

38
5.5 Random Processes

◊ Two random variables: $X_{t_i} \equiv X(t_i)$, $i = 1, 2$.
◊ The correlation is measured by the joint moment:
$E[X_{t_1} X_{t_2}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_{t_1} x_{t_2}\, f_X(x_{t_1}, x_{t_2})\,dx_{t_1}\,dx_{t_2}$
◊ Since this joint moment depends on the time instants t1 and t2, it is denoted by RX(t1, t2).
◊ RX(t1, t2) is called the auto-correlation function of the stochastic process.
◊ For a stationary stochastic process, the joint moment is:
$E[X_{t_1} X_{t_2}] = R_X(t_1, t_2) = R_X(t_1 - t_2) = R_X(\tau)$
◊ $R_X(-\tau) = E[X_{t_1} X_{t_1-\tau}] = E[X_{t_1'} X_{t_1'+\tau}] = R_X(\tau)$, where $t_1' = t_1 - \tau$.
◊ Average power in the process X(t): $R_X(0) = E[X_t^2]$.


39
5.5 Random Processes

◊ Wide-sense stationary (WSS)
◊ A wide-sense stationary process has the property that its mean value is independent of time (a constant) and its autocorrelation function satisfies the condition $R_X(t_1, t_2) = R_X(t_1 - t_2)$.
◊ Wide-sense stationarity is a less stringent condition than strict-sense stationarity.

40
5.5 Random Processes

◊ Auto-covariance function
◊ The auto-covariance function of a stochastic process is defined as:
$\mathrm{Cov}(X_{t_1}, X_{t_2}) = E\big[(X_{t_1} - m(t_1))(X_{t_2} - m(t_2))\big] = R_X(t_1, t_2) - m(t_1)\,m(t_2)$
◊ When the process is stationary, the auto-covariance function simplifies to:
$\mathrm{Cov}(X_{t_1}, X_{t_2}) = C_X(t_1 - t_2) = C_X(\tau) = R_X(\tau) - m^2$

◊ For a Gaussian random process, higher-order moments can


be expressed in terms of first and second moments.
Consequently, a Gaussian random process is completely
characterized by its first two moments.

41
5.6 Mean, Correlation and Covariance Functions

◊ Consider a random process X(t). We define the mean of the process X(t) as the expectation of the random variable obtained by observing the process at some time t, as shown by
$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, f_{X(t)}(x)\,dx$  (5.57)
◊ A random process is said to be stationary to first order if the distribution function (and therefore density function) of X(t) does not vary with time:
$f_{X(t_1)}(x) = f_{X(t_2)}(x)$ for all $t_1$ and $t_2$ $\;\Rightarrow\; \mu_X(t) = \mu_X$ for all $t$  (5.59)
◊ The mean of the random process is a constant.
◊ The variance of such a process is also constant.

42
5.6 Mean, Correlation and Covariance Functions

◊ We define the autocorrelation function of the process X(t) as the expectation of the product of two random variables X(t1) and X(t2):
$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X(t_1)X(t_2)}(x_1, x_2)\,dx_1\,dx_2$  (5.60)
◊ We say a random process X(t) is stationary to second order if the joint distribution $f_{X(t_1)X(t_2)}(x_1, x_2)$ depends only on the difference between the observation times t1 and t2:
$R_X(t_1, t_2) = R_X(t_2 - t_1)$ for all $t_1$ and $t_2$  (5.61)
◊ The autocovariance function of a stationary random process X(t) is written as
$C_X(t_1, t_2) = E[(X(t_1) - \mu_X)(X(t_2) - \mu_X)] = R_X(t_2 - t_1) - \mu_X^2$  (5.62)
43
5.6 Mean, Correlation and Covariance Functions

◊ For convenience of notation, we redefine the autocorrelation function of a stationary process X(t) as
$R_X(\tau) = E[X(t+\tau)X(t)]$ for all $t$  (5.63)
◊ This autocorrelation function has several important properties:
1. $R_X(0) = E[X^2(t)]$  (5.64)
2. $R_X(\tau) = R_X(-\tau)$  (5.65)
3. $|R_X(\tau)| \le R_X(0)$  (5.67)
◊ Proof of (5.64) can be obtained from (5.63) by putting τ = 0.

44
5.6 Mean, Correlation and Covariance Functions

◊ Proof of (5.65):
$R_X(\tau) = E[X(t+\tau)X(t)] = E[X(t)X(t+\tau)] = R_X(-\tau)$
◊ Proof of (5.67):
$E\big[(X(t+\tau) \pm X(t))^2\big] \ge 0$
$\Rightarrow\; E[X^2(t+\tau)] \pm 2E[X(t+\tau)X(t)] + E[X^2(t)] \ge 0$
$\Rightarrow\; 2R_X(0) \pm 2R_X(\tau) \ge 0$
$\Rightarrow\; -R_X(0) \le R_X(\tau) \le R_X(0)$
$\Rightarrow\; |R_X(\tau)| \le R_X(0)$
45
5.6 Mean, Correlation and Covariance Functions

◊ The physical significance of the autocorrelation function RX(τ) is that it provides a means of describing the "interdependence" of two random variables obtained by observing the random process X(t) at times τ seconds apart.

46
5.6 Mean, Correlation and Covariance Functions

◊ Example 5.7 Sinusoidal Signal with Random Phase.


◊ Consider a sinusoidal signal with random phase:
$X(t) = A\cos(2\pi f_c t + \Theta), \qquad f_\Theta(\theta) = \begin{cases} \dfrac{1}{2\pi}, & -\pi \le \theta \le \pi \\ 0, & \text{elsewhere} \end{cases}$
$R_X(\tau) = E[X(t+\tau)X(t)]$
$= \dfrac{A^2}{2} E\big[\cos(4\pi f_c t + 2\pi f_c \tau + 2\Theta)\big] + \dfrac{A^2}{2} E\big[\cos(2\pi f_c \tau)\big]$
$= \dfrac{A^2}{2}\int_{-\pi}^{\pi} \cos(4\pi f_c t + 2\pi f_c \tau + 2\theta)\,\dfrac{1}{2\pi}\,d\theta + \dfrac{A^2}{2}\cos(2\pi f_c \tau)$
$= \dfrac{A^2}{2}\cos(2\pi f_c \tau)$
47
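◊ An ensemble-average check (Python/NumPy) of Example 5.7: averaging X(t+τ)X(t) over many random phases reproduces $R_X(\tau) = (A^2/2)\cos(2\pi f_c\tau)$; A, f_c, t and τ below are illustrative values.

```python
import numpy as np

# Monte Carlo estimate of the autocorrelation of X(t) = A cos(2*pi*fc*t + Theta)
# with Theta ~ Uniform(-pi, pi), compared with (A**2 / 2) * cos(2*pi*fc*tau).
rng = np.random.default_rng(5)
A, fc, t, tau = 2.0, 5.0, 0.013, 0.04
theta = rng.uniform(-np.pi, np.pi, size=1_000_000)

x_t = A * np.cos(2 * np.pi * fc * t + theta)
x_t_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)

print(np.mean(x_t * x_t_tau), (A**2 / 2) * np.cos(2 * np.pi * fc * tau))
```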
5.6 Mean, Correlation and Covariance Functions

◊ Averages for joint stochastic processes


◊ Let X(t) and Y(t) denote two stochastic processes and let
Xti≡X(ti), i=1,2,…,n, Yt’j≡Y(t’j), j=1,2,…,m, represent the
random variables at times t1>t2>t3>…>tn, and
t’1>t’2>t’3>…>t’m , respectively. The two processes are
characterized statistically by their joint PDF:
$f_{XY}(x_{t_1}, x_{t_2}, \ldots, x_{t_n},\, y_{t_1'}, y_{t_2'}, \ldots, y_{t_m'})$
◊ The cross-correlation function of X(t) and Y(t), denoted by Rxy(t1,t2), is defined as the joint moment:
$R_{xy}(t_1, t_2) = E[X_{t_1} Y_{t_2}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_{t_1}\, y_{t_2}\, f_{XY}(x_{t_1}, y_{t_2})\,dx_{t_1}\,dy_{t_2}$
◊ The cross-covariance is:
$\mathrm{Cov}(X_{t_1}, Y_{t_2}) = R_{xy}(t_1, t_2) - m_x(t_1)\,m_y(t_2)$
48
5.6 Mean, Correlation and Covariance Functions

◊ Averages for joint stochastic processes


◊ When the processes are jointly and individually stationary, we have $R_{xy}(t_1, t_2) = R_{xy}(t_1 - t_2)$, and similarly for the cross-covariance:
$R_{xy}(-\tau) = E[X_{t_1} Y_{t_1-\tau}] = E[X_{t_1'+\tau} Y_{t_1'}] = E[Y_{t_1'} X_{t_1'+\tau}] = R_{yx}(\tau)$
◊ The stochastic processes X(t) and Y(t) are said to be statistically independent if and only if
$f_{XY}(x_{t_1}, x_{t_2}, \ldots, x_{t_n}, y_{t_1'}, y_{t_2'}, \ldots, y_{t_m'}) = f_X(x_{t_1}, x_{t_2}, \ldots, x_{t_n})\, f_Y(y_{t_1'}, y_{t_2'}, \ldots, y_{t_m'})$
for all choices of ti and t'i and for all positive integers n and m.
◊ The processes are said to be uncorrelated if
$R_{xy}(t_1, t_2) = E[X_{t_1}]\,E[Y_{t_2}] \;\Leftrightarrow\; \mathrm{Cov}(X_{t_1}, Y_{t_2}) = 0$
49
5.6 Mean, Correlation and Covariance Functions

◊ Example 5.9 Quadrature-Modulated Processes


◊ Consider a pair of quadrature-modulated processes X1(t) and X2(t):
$X_1(t) = X(t)\cos(2\pi f_c t + \Theta)$
$X_2(t) = X(t)\sin(2\pi f_c t + \Theta)$
$R_{12}(\tau) = E[X_1(t)\, X_2(t-\tau)]$
$= E\big[X(t)X(t-\tau)\cos(2\pi f_c t + \Theta)\sin(2\pi f_c t - 2\pi f_c \tau + \Theta)\big]$
$= E[X(t)X(t-\tau)]\, E\big[\cos(2\pi f_c t + \Theta)\sin(2\pi f_c t - 2\pi f_c \tau + \Theta)\big]$
$= \dfrac{1}{2} R_X(\tau)\, E\big[\sin(4\pi f_c t - 2\pi f_c \tau + 2\Theta) - \sin(2\pi f_c \tau)\big]$
$= -\dfrac{1}{2} R_X(\tau)\sin(2\pi f_c \tau), \qquad R_{12}(0) = E[X_1(t)X_2(t)] = 0$
50
5.6 Mean, Correlation and Covariance Functions

◊ Ergodic Processes
◊ In many instances, it is difficult or impossible to observe all sample
functions of a random process at a given time.
◊ It is often more convenient to observe a single sample function for a
long period of time.
◊ For a sample function x(t), the time average of the mean value over an observation period 2T is
$\mu_{x,T} = \dfrac{1}{2T}\int_{-T}^{T} x(t)\,dt$  (5.84)
◊ For many stochastic processes of interest in communications, the time averages and ensemble averages are equal, a property known as ergodicity.
◊ This property implies that whenever an ensemble average is required,
we may estimate it by using a time average.
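◊ A small simulation sketch (Python/NumPy) of the idea behind Eq. (5.84): for an ergodic process, the time average of one long sample function approaches the ensemble mean. The random-phase sinusoid plus noise below, and its parameters, are illustrative assumptions.

```python
import numpy as np

# Time average of a single sample function x(t) = A cos(2*pi*fc*t + theta) + noise;
# the ensemble mean of this process is 0, and the long time average approaches it.
rng = np.random.default_rng(6)
fs, T, fc, A = 1000.0, 200.0, 5.0, 1.0
t = np.arange(0, T, 1 / fs)

theta = rng.uniform(-np.pi, np.pi)                     # one realization of the phase
x = A * np.cos(2 * np.pi * fc * t + theta) + rng.normal(scale=0.5, size=t.size)

print(x.mean())    # Eq. (5.84) over a 200 s observation window; close to 0
```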
51
5.7 Transmission of a Random Process Through
a Linear Filter

◊ Suppose that a random process X(t) is applied as input to a linear time-invariant filter of impulse response h(t), producing a new random process Y(t) at the filter output.

◊ Assume that X(t) is a wide-sense stationary random process.


◊ The mean of the output random process Y(t) is given by
$\mu_Y(t) = E[Y(t)] = E\!\left[\int_{-\infty}^{\infty} h(\tau_1)\, X(t-\tau_1)\,d\tau_1\right] = \int_{-\infty}^{\infty} h(\tau_1)\, E[X(t-\tau_1)]\,d\tau_1 = \int_{-\infty}^{\infty} h(\tau_1)\, \mu_X(t-\tau_1)\,d\tau_1$  (5.86)

52
5.7 Transmission of a Random Process Through
a Linear Filter

◊ When the input random process X(t) is wide-sense stationary, the mean $\mu_X(t)$ is a constant $\mu_X$, so the mean $\mu_Y(t)$ is also a constant $\mu_Y$:
$\mu_Y(t) = \mu_X \int_{-\infty}^{\infty} h(\tau_1)\,d\tau_1 = \mu_X H(0)$  (5.87)
where H(0) is the zero-frequency (dc) response of the system.
◊ The autocorrelation function of the output random process Y(t) is given by:
$R_Y(t, u) = E[Y(t)Y(u)] = E\!\left[\int_{-\infty}^{\infty} h(\tau_1) X(t-\tau_1)\,d\tau_1 \int_{-\infty}^{\infty} h(\tau_2) X(u-\tau_2)\,d\tau_2\right]$
$= \int_{-\infty}^{\infty} d\tau_1\, h(\tau_1) \int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\, E[X(t-\tau_1)X(u-\tau_2)]$
$= \int_{-\infty}^{\infty} d\tau_1\, h(\tau_1) \int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\, R_X(t-\tau_1, u-\tau_2)$
53
5.7 Transmission of a Random Process Through
a Linear Filter

◊ When the input X(t) is a wide-sense stationary random process, the autocorrelation function of X(t) is only a function of the difference between the observation times:
$R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2$  (5.90)

◊ If the input to a stable linear time-invariant filter is a wide-sense


stationary random process, then the output of the filter is also a
wide-sense stationary random process.

54
5.8 Power Spectral Density
◊ The Fourier transform of the autocorrelation function RX(τ) is called the power spectral density SX( f ) of the random process X(t):
$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f\tau)\,d\tau$  (5.91)
$R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\exp(j2\pi f\tau)\,df$  (5.92)

◊ Equations (5.91) and (5.92) are basic relations in the theory of


spectral analysis of random processes, and together they constitute
what are usually called the Einstein-Wiener-Khintchine relations.

55
5.8 Power Spectral Density

◊ Properties of the Power Spectral Density



◊ Property 1: $S_X(0) = \int_{-\infty}^{\infty} R_X(\tau)\,d\tau$  (5.93)
◊ Proof: Let f = 0 in Eq. (5.91).
◊ Property 2: $E[X^2(t)] = \int_{-\infty}^{\infty} S_X(f)\,df$  (5.94)
◊ Proof: Let τ = 0 in Eq. (5.92) and note that RX(0) = E[X²(t)].
◊ Property 3: $S_X(f) \ge 0$ for all f  (5.95)
◊ Property 4: $S_X(-f) = S_X(f)$  (5.96)
◊ Proof: From (5.91), substituting τ → −τ and using $R_X(-\tau) = R_X(\tau)$:
$S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(j2\pi f\tau)\,d\tau = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f\tau)\,d\tau = S_X(f)$

56
Proof of Eq. (5.95)
It can be shown (see Eq. 5.106) that $S_Y(f) = S_X(f)\,|H(f)|^2$, so
$R_Y(\tau) = \int_{-\infty}^{\infty} S_Y(f)\exp(j2\pi f\tau)\,df = \int_{-\infty}^{\infty} S_X(f)\,|H(f)|^2 \exp(j2\pi f\tau)\,df$
$R_Y(0) = E[Y^2(t)] = \int_{-\infty}^{\infty} S_X(f)\,|H(f)|^2\,df \ge 0$ for any $H(f)$ (using Eq. 5.64)
◊ Suppose we let |H( f )|² = 1 over an arbitrarily small interval f1 ≤ f ≤ f2, and H( f ) = 0 outside this interval. Then we have:
$\int_{f_1}^{f_2} S_X(f)\,df \ge 0$
This is possible if and only if SX( f ) ≥ 0 for all f.
◊ Conclusion: SX( f ) ≥ 0 for all f.

57
5.8 Power Spectral Density

◊ Example 5.10 Sinusoidal Signal with Random Phase


◊ Consider the random process X(t)=Acos(2πfc t+Θ), where Θ is a
uniformly distributed random variable over the interval (-π,π).
◊ The autocorrelation function of this random process is given in Example 5.7:
$R_X(\tau) = \dfrac{A^2}{2}\cos(2\pi f_c \tau)$  (5.74)
◊ Taking the Fourier transform of both sides of this relation:
$S_X(f) = \dfrac{A^2}{4}\big[\delta(f - f_c) + \delta(f + f_c)\big]$  (5.97)

58
5.8 Power Spectral Density

◊ Example 5.12 Mixing of a Random Process with a


Sinusoidal Process
◊ A situation that often arises in practice is that of mixing (i.e.,
multiplication) of a WSS random process X(t) with a sinusoidal
signal cos(2πfc t+Θ), where the phase Θ is a random variable that
is uniformly distributed over the interval (0,2π).
◊ Determining the power spectral density of the random process Y(t) defined by:
$Y(t) = X(t)\cos(2\pi f_c t + \Theta)$  (5.101)

◊ We note that the random variable Θ is independent of X(t).

59
5.8 Power Spectral Density

◊ Example 5.12 Mixing of a Random Process with a


Sinusoidal Process (continued)
◊ The autocorrelation function of Y(t) is given by:
$R_Y(\tau) = E[Y(t+\tau)Y(t)]$
$= E\big[X(t+\tau)\cos(2\pi f_c t + 2\pi f_c \tau + \Theta)\, X(t)\cos(2\pi f_c t + \Theta)\big]$
$= E[X(t+\tau)X(t)]\, E\big[\cos(2\pi f_c t + 2\pi f_c \tau + \Theta)\cos(2\pi f_c t + \Theta)\big]$
$= \dfrac{1}{2} R_X(\tau)\, E\big[\cos(2\pi f_c \tau) + \cos(4\pi f_c t + 2\pi f_c \tau + 2\Theta)\big]$
$= \dfrac{1}{2} R_X(\tau)\cos(2\pi f_c \tau)$
Taking the Fourier transform:
$S_Y(f) = \dfrac{1}{4}\big[S_X(f - f_c) + S_X(f + f_c)\big]$  (5.103)
60
5.8 Power Spectral Density
◊ Relation among the Power Spectral Densities of the Input and
Output Random Processes
◊ Let SY( f ) denote the power spectral density of the output random process Y(t) obtained by passing the random process through a linear filter of transfer function H( f ).
$R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2$  (5.90)
$S_Y(f) = \int_{-\infty}^{\infty} R_Y(\tau)\, e^{-j2\pi f\tau}\,d\tau$
$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\, e^{-j2\pi f\tau}\,d\tau_1\,d\tau_2\,d\tau$
Let $\tau - \tau_1 + \tau_2 = \tau_0$:
$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau_0)\, e^{-j2\pi f(\tau_0 + \tau_1 - \tau_2)}\,d\tau_1\,d\tau_2\,d\tau_0$
$= \int_{-\infty}^{\infty} h(\tau_1)\, e^{-j2\pi f\tau_1}\,d\tau_1 \int_{-\infty}^{\infty} h(\tau_2)\, e^{j2\pi f\tau_2}\,d\tau_2 \int_{-\infty}^{\infty} R_X(\tau_0)\, e^{-j2\pi f\tau_0}\,d\tau_0$
$= H(f)\, H^{*}(f)\, S_X(f) = |H(f)|^2\, S_X(f)$  (5.106)

61
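◊ A numerical sketch (Python with NumPy/SciPy) of Eq. (5.106): the ratio of output to input PSD estimates for white noise passed through an LTI filter should follow |H( f )|². The FIR taps and sampling rate below are arbitrary illustration values.

```python
import numpy as np
from scipy import signal

# Filter white noise with a short FIR filter and compare the Welch PSD ratio
# P_yy / P_xx with |H(f)|^2 from the filter's frequency response.
rng = np.random.default_rng(7)
fs, n = 1000.0, 2**18
h = np.array([0.3, 0.5, 0.2, -0.1])          # impulse response h(t) (arbitrary taps)

x = rng.normal(size=n)                       # approximately white input process
y = np.convolve(x, h, mode="same")           # filter output

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
_, Pyy = signal.welch(y, fs=fs, nperseg=4096)
_, H = signal.freqz(h, worN=f, fs=fs)        # H(f) evaluated on the same frequency grid

k = 100                                      # inspect one frequency bin
print(Pyy[k] / Pxx[k], np.abs(H[k])**2)      # the two values should be close
```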
5.10 Noise
◊ The sources of noise may be external to the system (e.g.,
atmospheric noise, galactic noise, man-made noise), or internal to
the system.

◊ The second category includes an important type of noise that arises from spontaneous fluctuations of current or voltage in electrical circuits. This type of noise represents a basic limitation on the transmission or detection of signals in communication systems involving the use of electronic devices.
◊ The two most common examples of spontaneous fluctuations in electrical circuits are shot noise and thermal noise.

62
5.10 Noise
◊ Thermal Noise
◊ Thermal noise is the name given to the electrical noise arising
from the random motion of electrons in a conductor.

◊ The mean-square value of the thermal noise voltage VTN, appearing across the terminals of a resistor, measured in a bandwidth of Δf Hertz, is given by:
$E[V_{TN}^2] = 4kTR\,\Delta f \ \ \text{volts}^2$
k: Boltzmann's constant = 1.38 × 10^-23 joules per degree Kelvin.
T: Absolute temperature in degrees Kelvin.
R: The resistance in ohms.
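◊ A quick numerical evaluation (Python) of the thermal-noise formula above for an example resistor; R, T and the bandwidth are illustrative values.

```python
import numpy as np

# E[V_TN^2] = 4kTR*delta_f for a 1 kilo-ohm resistor at room temperature,
# measured over a 1 MHz bandwidth.
k = 1.38e-23       # Boltzmann's constant, J/K
T = 290.0          # absolute temperature, K
R = 1e3            # resistance, ohms
delta_f = 1e6      # measurement bandwidth, Hz

mean_square_v = 4 * k * T * R * delta_f
print(np.sqrt(mean_square_v))    # rms noise voltage, roughly 4 microvolts
```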

63
5.10 Noise
◊ White Noise
◊ The noise analysis is customarily based on an idealized form of
noise called white noise, the power spectral density of which is
independent of the operating frequency.
◊ White is used in the sense that white light contains equal
amount of all frequencies within the visible band of
electromagnetic radiation.
◊ We express the power spectral density of white noise, with a sample function denoted by w(t), as
$S_W(f) = \dfrac{N_0}{2}, \qquad N_0 = kT_e$
The dimensions of N0 are in watts per Hertz, k is Boltzmann's constant, and Te is the equivalent noise temperature of the receiver.
64
5.10 Noise
◊ White Noise
◊ The equivalent noise temperature of a system is defined as the
temperature at which a noisy resistor has to be maintained such
that, by connecting the resistor to the input of a noiseless
version of the system, it produces the same available noise
power at the output of the system as that produced by all the
sources of noise in the actual system.
◊ The autocorrelation function is the inverse Fourier transform of the power spectral density:
$R_W(\tau) = \dfrac{N_0}{2}\,\delta(\tau)$
◊ Any two different samples of white noise, no matter how
closely together in time they are taken, are uncorrelated.
◊ If the white noise w(t) is also Gaussian, then the two samples
are statistically independent.
65
5.10 Noise
◊ Example 5.14 Ideal Low-Pass Filtered White Noise
◊ Suppose that a white Gaussian noise w(t) of zero mean and
power spectral density N0/2 is applied to an ideal low-pass filter
of bandwidth B and passband amplitude response of one.
◊ The power spectral density of the noise n(t) is
$S_N(f) = \begin{cases} \dfrac{N_0}{2}, & -B \le f \le B \\ 0, & |f| > B \end{cases}$
◊ The autocorrelation function of n(t) is
$R_N(\tau) = \int_{-B}^{B} \dfrac{N_0}{2}\exp(j2\pi f\tau)\,df = N_0 B\,\mathrm{sinc}(2B\tau)$

66
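◊ A simulation sketch (Python/NumPy) of Example 5.14: white Gaussian noise passed through an ideal (brick-wall) low-pass filter has an autocorrelation close to $N_0 B\,\mathrm{sinc}(2B\tau)$. The sampling rate, bandwidth and noise variance below are illustrative assumptions.

```python
import numpy as np

# Brick-wall low-pass filtering of white noise in the frequency domain, then
# comparison of the empirical autocorrelation at one lag with N0*B*sinc(2*B*tau).
rng = np.random.default_rng(8)
fs, n, B, sigma = 10_000.0, 2**20, 500.0, 1.0

w = rng.normal(scale=sigma, size=n)          # white noise; two-sided PSD N0/2 = sigma^2/fs
N0 = 2 * sigma**2 / fs

W = np.fft.rfft(w)
f = np.fft.rfftfreq(n, d=1 / fs)
W[f > B] = 0.0                               # ideal low-pass: keep only |f| <= B
x = np.fft.irfft(W, n)

lag = 3                                      # lag in samples, so tau = lag / fs
tau = lag / fs
empirical = np.mean(x[:-lag] * x[lag:])
theory = N0 * B * np.sinc(2 * B * tau)       # np.sinc(u) = sin(pi*u)/(pi*u), as in the slides
print(empirical, theory)
```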
