
EE7401 Probability and Random Processes

Random Processes

Anamitra Makur
Office: S1-b1c-103
Phone: 4013
Email: [email protected]

Anamitra Makur School of Electrical & Electronic Engineering 1

Reference: Athanasios Papoulis (and S. Unnikrishna Pillai), Probability, Random Variables, and Stochastic Processes, McGraw-Hill.
Course Outline
Random process and correlation function
Basic concepts. Statistics of stochastic processes. Definition of autocorrelation function. Properties
of autocorrelation function. Poisson process. Definition and properties of cross correlation function.
Correlation coefficient. White noise process. Normal process. Stationary random process. Wide-
sense stationary process.

Random process in linear system and power spectrum


System with stochastic input. Examples, square law detector. Linear time-invariant system. Time
domain analysis of linear system – input/output relationship between correlation functions.
Definition of power spectrum, relationship between power spectrum and autocorrelation function.
Property of power spectrum. Cross-power spectrum. Existence theorem. Frequency domain analysis
of linear system – input/output relationship between power spectra. White noise. Hilbert
transform of random process. Wiener-Khinchin theorem. Discrete-time process. Correlation function
related to discrete-time random process. Discrete-time linear time-invariant system. Power spectrum
for discrete-time process. AR(1) process.

Basic application
Random walk and Wiener process. Thermal noise. Shot noise. Modulation, bandlimited process,
sampling expansion.



Optimum linear system
Systems that maximize signal-to-noise ratio - Matched filter in the presence of white noise and
colored noise. Systems that minimize mean-square error - Smoothing.

Ergodicity
Time average. Mean ergodic process, Slutsky’s theorem. Discrete-time case. Covariance ergodic
process. Distribution ergodic process. Measurement of power spectrum, autocorrelation estimate of
power spectrum, periodogram estimate.

Markov Chains and Markov Processes

Discrete-time Markov chains, continuous-time Markov chains.

topics in red italics = not for examination purpose (pages marked by )


Stochastic Process
Random variable x takes a value for every outcome of an experiment.
Stochastic process x(t) takes a function of time for every outcome ζ of an experiment.

[Figure: x(t) as an ensemble of time functions, one sample function per outcome ζ1, ζ2, ζ3, plotted against t]

x(t), t ∈ real axis → continuous-time process
x[n], n ∈ integer axis → discrete-time process


Equality: x(t) = y(t) if x(t, ζ) = y(t, ζ) ∀t, ζ
x(t) and y(t) are equal in the MS sense if E{|x(t) − y(t)|²} = 0 ∀t

First Order Statistics of Stochastic Processes

At a given t1, x(t1) is a random variable.

[Figure: three sample functions evaluated at t = t1]

For a specific t, F(x, t) = P{x(t) ≤ x}   first order distribution

f(x, t) = ∂F(x, t)/∂x   first order density

mean η(t) = E{x(t)} = ∫ x f(x, t) dx

Second Order Statistics of Stochastic Processes

At t1, x(t1) = y1 is a random variable.
At t2, x(t2) = y2 is another random variable.

second order distribution F(x1, x2; t1, t2) = P{x(t1) ≤ x1, x(t2) ≤ x2}

second order density f(x1, x2; t1, t2) = ∂²F(x1, x2; t1, t2)/∂x1∂x2

x(t) is in general a complex process, x(t) = a(t) + jb(t),
where a(t) and b(t) are real processes.

autocorrelation Rxx(t1, t2) = E{x(t1)x*(t2)} = ∫∫ x1 x2* f(x1, x2; t1, t2) dx1 dx2


by definition Rxx(t2, t1) = Rxx*(t1, t2)

for real processes Rxx(t2, t1) = Rxx(t1, t2)

for t1 = t2 = t, Rxx(t, t) = E{|x(t)|²} = average power ≥ 0

Example 1: x(t, ζi) = r(ζi) cos(ωt + φ(ζi))

r, φ independent real random variables, φ uniform in (−π, π)

[Figure: two sample functions of x(t), sinusoids with different amplitude and phase]

η(t) = E{x(t)} = E{r}E{cos(ωt + φ)}


but E{cos(ωt + φ)} = ∫ cos(ωt + θ) (1/2π) dθ = 0   (integral over (−π, π))

∴ η(t) = 0

Rxx(t1, t2) = E{x(t1)x(t2)}
= E{r² cos(ωt1 + φ)cos(ωt2 + φ)}
= ½E{r²}E{cos(ωt1 + ωt2 + 2φ) + cos(ωt1 − ωt2)}
= ½E{r²}[E{cos(ω(t1 + t2) + 2φ)} + E{cos(ω(t1 − t2))}]
but E{cos(ω(t1 + t2) + 2φ)} = 0 as before
= ½ cos(ω(t1 − t2)) E{r²}

average power = Rxx(t, t) = ½E{r²} = variance


let τ = t2 − t1
then Rxx(τ) = ½ cos(ωτ) E{r²}
autocorrelation depends only on τ

[Figure: Rxx(τ) = ½E{r²}cos(ωτ), a cosine of period 2π/ω and peak value ½E{r²}]

First order statistics for r = 1 (or a constant):

F(x, t) = P{x(t) ≤ x} = P{cos(ωt + φ) ≤ x} = P{cos y ≤ x}
where y = ωt + φ is uniform in (ωt − π, ωt + π)
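The Example 1 results, η(t) = 0 and Rxx(τ) = ½E{r²}cos(ωτ), can be checked by ensemble averaging. A quick Monte Carlo sketch (not from the notes; the Rayleigh amplitude, the frequency ω = 2 and the sample size are hypothetical choices):

```python
import numpy as np

# Monte Carlo check of Example 1: x(t) = r*cos(w*t + phi), phi ~ Uniform(-pi, pi)
# independent of r. Theory: E{x(t)} = 0 and Rxx(tau) = 0.5*E{r^2}*cos(w*tau).
rng = np.random.default_rng(0)
N = 200_000
w = 2.0
r = rng.rayleigh(1.0, N)               # any real amplitude distribution works
phi = rng.uniform(-np.pi, np.pi, N)

t, tau = 1.3, 0.7
x_t = r * np.cos(w * t + phi)          # ensemble of x(t) values
x_lag = r * np.cos(w * (t + tau) + phi)

mean_hat = np.mean(x_t)                                # estimate of eta(t)
R_hat = np.mean(x_lag * x_t)                           # estimate of Rxx(tau)
R_theory = 0.5 * np.mean(r**2) * np.cos(w * tau)       # 0.5*E{r^2}*cos(w*tau)
```

Note that each ensemble member uses the same (r, φ) pair at both time instants, i.e. the same outcome ζ observed at two times.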

Consider a period from 0 to 2π.

[Figure: cos y versus y over one period; the level x is crossed at y = cos⁻¹x and y = 2π − cos⁻¹x]

Now
P{cos y ≤ x} = P{cos⁻¹x ≤ y ≤ 2π − cos⁻¹x}
for 0 ≤ cos⁻¹x ≤ π, i.e. −1 ≤ x ≤ 1
= (2π − 2cos⁻¹x)/2π   since y is uniform
= 1 − (1/π)cos⁻¹x


∴ F(x, t) = 0 for x ≤ −1
          = 1 − (1/π)cos⁻¹x for −1 ≤ x ≤ 1
          = 1 for 1 ≤ x

[Figure: plot of F(x, t) versus x, rising from 0 at x = −1 to 1 at x = 1]

F(x, t) independent of t

f(x, t) = 1/(π√(1 − x²)) for |x| < 1
        = 0 for |x| > 1

[Figure: plot of f(x, t) versus x, with singularities at x = ±1]
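The arcsine-type distribution above is easy to verify empirically: sample cos(ωt + φ) with uniform φ and compare the empirical CDF against 1 − (1/π)cos⁻¹x. A minimal sketch (the values of ω and t are hypothetical; the result does not depend on them):

```python
import numpy as np

# Empirical check of the first-order CDF F(x,t) = 1 - arccos(x)/pi for
# x(t) = cos(w*t + phi), phi ~ Uniform(-pi, pi).
rng = np.random.default_rng(1)
N = 100_000
phi = rng.uniform(-np.pi, np.pi, N)
w, t = 2.0, 0.4
samples = np.cos(w * t + phi)

xs = np.array([-0.9, -0.5, 0.0, 0.5, 0.9])
F_emp = np.array([np.mean(samples <= x) for x in xs])   # empirical CDF
F_theory = 1.0 - np.arccos(xs) / np.pi
max_err = np.max(np.abs(F_emp - F_theory))
```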

Poisson Process
Place points ti at random on the entire t axis such that on average there are λ points per unit time.

[Figure: random points ti marked on the t axis]

The number of points n(t1, t2) in an interval (t1, t2) is a Poisson random variable:

P{n(t1, t2) = k} = e^{−λt}(λt)^k / k!,   where t = t2 − t1 is the interval length

If the intervals (t1, t2) and (t3, t4) are non-overlapping, then the random variables n(t1, t2) and n(t3, t4) are independent.


x(t )  n(0, t ) is the Poisson process
Poisson
arrivalst
0 t1 ti
x (t )
1
t
 (t ) = mean of Poisson process
= average number of points in interval t = E x(t )  E n(0, t )  t

Rxx (t , t )  E{x 2 (t )} = average power of Poisson process



ea a k 2
 k for a = λt
k 1 k!

ak 2
 e a  k
k 1 k!

Anamitra Makur School of Electrical & Electronic Engineering 13


using the Taylor series expansion, e^a = Σ_{k≥0} a^k / k!

d²e^a/da² = e^a = Σ_{k≥1} k(k − 1) a^{k−2} / k!
= (1/a²) Σ_{k≥1} k² a^k / k! − (1/a²) Σ_{k≥1} k a^k / k!

but Σ_{k≥1} k a^k / k! = a Σ_{k≥1} a^{k−1} / (k − 1)! = e^a a

so e^a = (1/a²) Σ_{k≥1} k² a^k / k! − (1/a²) e^a a

⇒ Σ_{k≥1} k² a^k / k! = e^a a + e^a a²

therefore Rxx(t, t) = E{x²(t)} = e^{−a}(e^a a + e^a a²) = a + a² = λt + λ²t²


autocorrelation Rxx(t1, t2) = E{x(t1)x(t2)}
= E{x(t1)[x(t1) + x(t2) − x(t1)]}
= E{x²(t1)} + E{x(t1)[x(t2) − x(t1)]}
But x(t1) and x(t2) − x(t1) are independent (for t1 ≤ t2) because the intervals (0, t1) and (t1, t2) are non-overlapping
= E{x²(t1)} + E{x(t1)}E{x(t2) − x(t1)}
= λt1 + λ²t1² + λt1 · λ(t2 − t1) = λt1 + λ²t1t2

using Rxx(t1, t2) = Rxx(t2, t1):
Rxx(t1, t2) = λt1 + λ²t1t2 for t1 ≤ t2
            = λt2 + λ²t1t2 for t2 ≤ t1
or Rxx(t1, t2) = λ min(t1, t2) + λ²t1t2
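The result Rxx(t1, t2) = λmin(t1, t2) + λ²t1t2 can be checked numerically, using the independent-increments property to draw x(t2) as x(t1) plus an independent Poisson count on (t1, t2). A sketch with hypothetical parameter values:

```python
import numpy as np

# Check Rxx(t1,t2) = lam*min(t1,t2) + lam^2*t1*t2 for the Poisson process x(t) = n(0,t).
rng = np.random.default_rng(2)
lam = 3.0
t1, t2 = 1.0, 2.5
N = 200_000
n1 = rng.poisson(lam * t1, N)               # x(t1) = n(0, t1)
n2 = n1 + rng.poisson(lam * (t2 - t1), N)   # x(t2) = x(t1) + n(t1, t2), independent increment
R_hat = np.mean(n1 * n2)
R_theory = lam * min(t1, t2) + lam**2 * t1 * t2
```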

n-th Order Statistics

F(x1, …, xn; t1, …, tn) = P{x(t1) ≤ x1, …, x(tn) ≤ xn}

Second Order Statistics (continued)

Autocorrelation is a positive definite function:
for any a0, a1, …, am,   Σ_{i=0}^{m} Σ_{j=0}^{m} ai aj* Rxx(ti, tj) ≥ 0

Proof: E{|Σ_{i=0}^{m} ai x(ti)|²} ≥ 0 since the argument is non-negative
⇒ E{Σi Σj ai x(ti) aj* x*(tj)} ≥ 0
⇒ Σi Σj ai aj* E{x(ti)x*(tj)} ≥ 0, from which the result follows.


autocovariance Cxx(t1, t2) = Rxx(t1, t2) − η(t1)η*(t2)

for t1 = t2 = t, Cxx(t, t) = variance of x(t)

correlation coefficient r(t1, t2) = Cxx(t1, t2) / √(Cxx(t1, t1)Cxx(t2, t2))

It follows that |r(t1, t2)| ≤ 1 and r(t, t) = 1

Cross-correlation of two processes x(t) and y(t) is
Rxy(t1, t2) = E{x(t1)y*(t2)} = Ryx*(t2, t1)

cross-covariance Cxy(t1, t2) = Rxy(t1, t2) − ηx(t1)ηy*(t2)

If Rxy(t1, t2) = 0 ∀t1, t2 then x(t) and y(t) are orthogonal

If Cxy(t1, t2) = 0 ∀t1, t2 then x(t) and y(t) are uncorrelated

White Noise Process

If Cxx(t1, t2) = 0 ∀t1 ≠ t2 then x(t) is a white noise
[x(t1) and x(t2) are uncorrelated for every t1 ≠ t2]

However, Cxx(t, t) = variance of x(t) ≠ 0 for a nontrivial process

therefore, Cxx(t1, t2) = q(t1)δ(t1 − t2) for some q(t1) ≥ 0
δ(t) = impulse function


Normal Process
If x(t1), x(t2), …, x(tn) are jointly normal for any n, t1, t2, …, tn,
then x(t) is a normal process (real).
Its first order density is normal with mean η(t) and variance Cxx(t, t).

Stationary Process
Strict-Sense Stationary (SSS): statistics do not change with time
f(x1, …, xn; t1, …, tn) = f(x1, …, xn; t1 + c, …, tn + c) ∀c

Thus, first-order statistics are independent of time, f(x, t) = f(x),
and second-order statistics depend on the time difference only,
f(x1, x2; t1, t2) = f(x1, x2; τ),  τ = t1 − t2

Our Example 1, x(t) = r cos(ωt + φ), is a SSS process.

Wide-Sense Stationary (WSS): mean and autocorrelation do not change with time
E{x(t)} = η(t) = η, constant
E{x(t + τ)x*(t)} = Rxx(t + τ, t) = Rxx(τ)

Also implies constant average power E{|x(t)|²} = Rxx(0)

Two processes x(t) and y(t) are called jointly WSS if each is WSS
and Rxy(t + τ, t) = Rxy(τ)

SSS ⇒ WSS, but WSS does not imply SSS


However, if x(t) is a normal process, then x(t) WSS ⇒ x(t) SSS.

Example 1 (contd.)
Show that x(t) = cos(ωt + φ) with Φ(λ) = E{e^{jλφ}}, Φ(1) = Φ(2) = 0,
is WSS. Find E{x(t)} and Rxx(τ).

Solution: E{x(t)} = E{cos(ωt + φ)} = E{Re[e^{j(ωt+φ)}]}
= Re[e^{jωt} E{e^{jφ}}] = Re[e^{jωt} Φ(1)] = 0
hence, mean independent of time

Rxx(τ) = E{x(t)x(t + τ)} = E{cos(ωt + φ)cos(ωt + ωτ + φ)}
= ½E{cos(2ωt + 2φ + ωτ) + cos(ωτ)}

= ½E{cos(2ωt + 2φ + ωτ)} + ½cos(ωτ)
= ½Re[e^{j(2ωt + ωτ)} E{e^{j2φ}}] + ½cos(ωτ)
= ½Re[e^{j(2ωt + ωτ)} Φ(2)] + ½cos(ωτ) = 0 + ½cos(ωτ)

These answers match the earlier answers on taking E{r²} = 1, since r = 1.

This is because, if φ is uniform in (−π, π), then
Φ(λ) = ∫_{−π}^{π} e^{jλθ} (1/2π) dθ = sin(λπ)/(λπ)
and Φ(1) = Φ(2) = 0 since sin(π) = sin(2π) = 0


Example 2:
If x(t) is a normal process with zero mean and y(t) = I e^{a x(t)},
then ηy(t) = E{y(t)} = E{I e^{a x(t)}} = ∫ I e^{ax} f(x, t) dx

But f(x, t) = (1/√(2πRxx(t, t))) e^{−x²/2Rxx(t, t)}
since variance = Rxx(t, t) because the mean is zero.

So ηy(t) = ∫ I e^{ax} (1/√(2πRxx(t, t))) e^{−x²/2Rxx(t, t)} dx
= I e^{½a²Rxx(t, t)} ∫ (1/√(2πRxx(t, t))) e^{−[x − aRxx(t, t)]²/2Rxx(t, t)} dx

= I e^{½a²Rxx(t, t)}   (the remaining integral is of a normal density, hence 1)

If x(t) is stationary, then Rxx(t1, t2) = Rxx(τ), and ηy = I e^{½a²Rxx(0)}

For x(t) stationary, we will now find Ryy(t1, t2).

Since it is a memoryless system, y(t) is SSS since x(t) is.
Therefore, Ryy(t1, t2) = Ryy(τ)

Now, f(x1, x2; τ) = (1/(2πRxx(0)√(1 − r²))) e^{−(x1² − 2r x1x2 + x2²)/(2(1 − r²)Rxx(0))}
since it is a zero-mean joint normal density with equal variance Rxx(0).

Here, r = correlation coefficient = Rxx(τ)/Rxx(0) since zero-mean.


So Ryy(τ) = E{y(t)y(t + τ)} = E{I² e^{a[x(t) + x(t+τ)]}}
= ∫∫ I² e^{a(x1 + x2)} f(x1, x2; τ) dx1 dx2
= I² ∫∫ (1/(2πRxx(0)√(1 − r²))) e^{a(x1 + x2) − (x1² − 2r x1x2 + x2²)/(2(1 − r²)Rxx(0))} dx1 dx2
= I² e^{a²Rxx(0)(1 + r)} ∫∫ (1/(2πRxx(0)√(1 − r²))) e^{−[(x1 − η)² − 2r(x1 − η)(x2 − η) + (x2 − η)²]/(2(1 − r²)Rxx(0))} dx1 dx2
   for η = aRxx(0)(1 + r)
= I² e^{a²Rxx(0)(1 + r)} = I² e^{a²[Rxx(0) + Rxx(τ)]}
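The closed form Ryy(τ) = I²e^{a²[Rxx(0) + Rxx(τ)]} is easy to validate by drawing the pair (x(t), x(t + τ)) directly as a bivariate normal. A sketch with hypothetical values of I, a, Rxx(0) and Rxx(τ):

```python
import numpy as np

# Check Ryy(tau) = I^2*exp(a^2*(Rxx(0)+Rxx(tau))) for y(t) = I*e^{a*x(t)},
# x(t) a zero-mean stationary normal process.
rng = np.random.default_rng(4)
N = 400_000
I, a = 2.0, 0.5
R0, Rtau = 1.0, 0.6                          # hypothetical Rxx(0), Rxx(tau)
x = rng.multivariate_normal([0.0, 0.0], [[R0, Rtau], [Rtau, R0]], N)
y1 = I * np.exp(a * x[:, 0])                 # y(t)
y2 = I * np.exp(a * x[:, 1])                 # y(t + tau)
Ryy_hat = np.mean(y1 * y2)
Ryy_theory = I**2 * np.exp(a**2 * (R0 + Rtau))
```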

System with Stochastic Input

[Figure: x(t) → T{·} → y(t)]
y(t) = T[x(t)],  where T is a deterministic system

If y(t) = g(x(t)), then the output y(t) depends only on the present input x(t) ⇒ memoryless system.
Since y(t1) = g(x(t1)), …, y(tn) = g(x(tn)), the density f(y1, …, yn; t1, …, tn) can be determined from f(x1, …, xn; t1, …, tn) using earlier methods:

If y1 = g1(x1, …, xn), …, yn = gn(x1, …, xn),
and the random variables x1, …, xn have density fx(x1, …, xn),
then the density of y1, …, yn at values y1′, …, yn′ may be found as follows.


If g1  x1 ,.., xn   y1',L, g n  x1 ,.., xn   yn' has one solution x1,.., xn
f ( x,.., xn )
then f y ( y1,.., yn )  x 1
| J ( x1,.., xn ) |

where J is the jacobian of the transformation.


If there are several solutions, add the corresponding terms.

[for systems with memory, this task is very complex]

For a memoryless system, if x(t ) is SSS, then y (t ) is also SSS.


However, if x(t ) is WSS, then y (t ) may or may not be WSS.

Anamitra Makur School of Electrical & Electronic Engineering 27

Example 3: Square Law Detector

y(t) = x²(t) (real)
For y > 0, y = x² has two solutions x = ±√y
Jacobian: dy/dx = ±2√y
⇒ fy(y; t) = (1/(2√y)) [fx(√y; t) + fx(−√y; t)]   first order density
(no solution for y < 0)

If x(t) is SSS, then fx(x; t) = fx(x) ⇒ fy(y; t) = fy(y)

For example, take x(t) to be a normal stationary zero mean process,
fx(x) = (1/√(2πRxx(0))) e^{−x²/2Rxx(0)}


⇒ fy(y) = (1/√(2πRxx(0)y)) e^{−y/2Rxx(0)} U(y)
U(y) = unit step function
E{y(t)} = Rxx(0)

Now for y1 > 0, y2 > 0, the system y1 = x1², y2 = x2²
has four solutions (±√y1, ±√y2)

Jacobian = |∂(y1, y2)/∂(x1, x2)| = 4√(y1y2)

⇒ fy(y1, y2; t1, t2) = (1/(4√(y1y2))) Σ fx(±√y1, ±√y2; t1, t2)   second order density

If x(t) is SSS, then
fx(x1, x2; t1, t2) = fx(x1, x2; τ)
⇒ fy(y1, y2; t1, t2) = fy(y1, y2; τ)

For x(t) a normal stationary zero mean process, the second order density may be similarly found.

Since x(t + τ) and x(t) are jointly normal zero mean,
Ryy(τ) = E{x²(t + τ)x²(t)} = E{x²(t + τ)}E{x²(t)} + 2E²{x(t + τ)x(t)}
= Rxx²(0) + 2Rxx²(τ)

E{y²(t)} = Ryy(0) = 3Rxx²(0),  σy² = 2Rxx²(0)
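The fourth-moment identity behind Ryy(τ) = Rxx²(0) + 2Rxx²(τ) can be checked by sampling the jointly normal pair (x(t), x(t + τ)). A sketch with hypothetical Rxx(0) and Rxx(τ) values:

```python
import numpy as np

# Check the square-law detector result Ryy(tau) = Rxx(0)^2 + 2*Rxx(tau)^2
# for y(t) = x(t)^2 with x(t) zero-mean stationary normal.
rng = np.random.default_rng(5)
N = 400_000
R0, Rtau = 2.0, 1.2                          # hypothetical Rxx(0), Rxx(tau)
x = rng.multivariate_normal([0.0, 0.0], [[R0, Rtau], [Rtau, R0]], N)
Ryy_hat = np.mean(x[:, 0]**2 * x[:, 1]**2)   # E{x^2(t+tau) x^2(t)}
Ryy_theory = R0**2 + 2 * Rtau**2
```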


Linear Systems

[Figure: x(t) → L → y(t)]
y(t) = L[x(t)]

Superposition holds: L[a1x1(t) + a2x2(t)] = a1L[x1(t)] + a2L[x2(t)]
for any a1, a2, x1(t), x2(t)

Time-invariant if L[x(t − c)] = y(t − c) ∀c

h(t) = impulse response of a linear time-invariant (LTI) system = L[δ(t)]

Then y(t) = x(t) * h(t) = ∫ x(t − α)h(α)dα

Causal if h(t) = 0 for all t < 0

If x(t) is a normal process, then y(t) is also a normal process.

If x(t) is SSS, then y(t) is also SSS.

If x(t) is WSS, then x(t) and y(t) are jointly WSS: ηx, ηy, Rxx, Ryy, Rxy

Fundamental theorem:
E{L[x(t)]} = E{∫ x(t − α)h(α)dα} = ∫∫ x h(α) f(x, t − α) dα dx
= ∫ h(α) [∫ x f(x, t − α) dx] dα

but ∫ x f(x, t − α) dx = E{x(t − α)} = ηx(t − α)

= ∫ ηx(t − α)h(α)dα
= L[ηx(t)] = L[E{x(t)}]


1. Rxy(t1, t2) = L2*[Rxx(t1, t2)], where L2 operates on t2
   = ∫ Rxx(t1, t2 − α)h*(α)dα

2. Ryy(t1, t2) = L1[Rxy(t1, t2)]

Proof: y*(t2) = L2*[x*(t2)]
Using superposition, x(t1)y*(t2) = L2*[x(t1)x*(t2)]
Therefore E{x(t1)y*(t2)} = L2*[E{x(t1)x*(t2)}]
or Rxy(t1, t2) = L2*[Rxx(t1, t2)]
(Part 2 similar)

Special case: if x(t) is white noise, then Rxx(t1, t2) = q(t1)δ(t1 − t2)
It follows that E{|y(t)|²} = q(t) * |h(t)|²

Example 4: Poisson Impulses

z(t) = Σi δ(t − ti) = d x(t)/dt

[Figure: Poisson process x(t) → differentiator d/dt → impulse train z(t)]

here L = d/dt
So Rzz(t1, t2) = L1[L2[Rxx(t1, t2)]] = ∂²Rxx(t1, t2)/∂t1∂t2
= ∂/∂t1 [λ²t1 + λU(t1 − t2)]
= λ² + λδ(t1 − t2)

For τ = t1 − t2, Rzz(τ) = λ² + λδ(τ)

Szz(ω) = 2πλ²δ(ω) + λ    Hence z(t) is at least WSS

E{z(t)} = E{d x(t)/dt} = d E{x(t)}/dt = d(λt)/dt = λ

Example 5: (non-stationary case)

c is uniform in (0, T)
Find the autocorrelation if (i) x(t) = U(t − c), (ii) y(t) = δ(t − c)

(i) [Figure: x(t) is a unit step starting at t = c, 0 < c < T]

Rxx(t1, t2) = E{U(t1 − c)U(t2 − c)}
= ∫ U(t1 − c)U(t2 − c) f(c) dc,   f(c) = density of c
= ∫₀ᵀ U(t1 − c)U(t2 − c) (1/T) dc

but U(t − c) = 0 for c > t
            = 1 for c < t


⇒ U(t1 − c)U(t2 − c) = 0 for c > min(t1, t2)
                     = 1 for c < min(t1, t2)

Rxx(t1, t2) = 0   for min(t1, t2) ≤ 0
            = ∫₀^{min(t1,t2)} (1/T) dc = min(t1, t2)/T   for 0 ≤ min(t1, t2) ≤ T
              (i.e. t1/T if t1 ≤ t2, t2/T if t2 ≤ t1)
            = ∫₀ᵀ (1/T) dc = 1   for T ≤ min(t1, t2)

(ii) [Figure: y(t) is an impulse at t = c, 0 < c < T]

Ryy(t1, t2) = E{δ(t1 − c)δ(t2 − c)}
= ∫₀ᵀ δ(t1 − c)δ(t2 − c) (1/T) dc

∫ δ(t1 − c)δ(t2 − c) dc = δ(t1 − t2)

⇒ Ryy(t1, t2) = δ(t1 − t2)/T for 0 ≤ t1, t2 ≤ T
              = 0 else


alternate approach:

Rxx(t1, t2) = (t1/T)[U(t1) − U(t1 − T)]U(t2 − t1)         ← 0 ≤ t1 ≤ T, t1 ≤ t2
            + (t2/T)[U(t2) − U(t2 − T)][1 − U(t2 − t1)]    ← 0 ≤ t2 ≤ T, t2 ≤ t1
            + U(t1 − T)U(t2 − T)                           ← T ≤ t1, t2

y(t) = dx(t)/dt  ⇒  Ryy(t1, t2) = ∂²Rxx(t1, t2)/∂t1∂t2

∂Rxx(t1, t2)/∂t1
= (1/T)[U(t1) − U(t1 − T)]U(t2 − t1) + (t1/T)[δ(t1) − δ(t1 − T)]U(t2 − t1)
  − (t1/T)[U(t1) − U(t1 − T)]δ(t2 − t1)
  + (t2/T)[U(t2) − U(t2 − T)]δ(t2 − t1)   ← these two δ(t2 − t1) terms cancel each other, since t1 = t2 at the impulse
  + δ(t1 − T)U(t2 − T)

= (1/T)[U(t1) − U(t1 − T)]U(t2 − t1)
  + (t1/T)δ(t1)U(t2 − t1)                 ← zero, since t1 = 0 at the impulse
  − (t1/T)δ(t1 − T)U(t2 − t1) + δ(t1 − T)U(t2 − T)   ← cancel each other, since t1 = T at the impulse

= (1/T)[U(t1) − U(t1 − T)]U(t2 − t1)

∂/∂t2 of this gives ∂²Rxx(t1, t2)/∂t1∂t2 = (1/T)[U(t1) − U(t1 − T)]δ(t2 − t1),
which is the same as before.

The relationship between cross-correlations may be used to express moments of the output in terms of moments of the input.

For example, to find Ryyy(t1, t2, t3): (real case)

E{x(t1)x(t2)y(t3)} = L3[E{x(t1)x(t2)x(t3)}] = L3[Rxxx(t1, t2, t3)]

E{x(t1)y(t2)y(t3)} = L2[E{x(t1)x(t2)y(t3)}] = L2[Rxxy(t1, t2, t3)]

Ryyy(t1, t2, t3) = E{y(t1)y(t2)y(t3)} = L1[E{x(t1)y(t2)y(t3)}]
= L1[Rxyy(t1, t2, t3)] = L1[L2[L3[Rxxx(t1, t2, t3)]]]


Power Spectrum (Power Spectral Density, psd)
Fourier transform of the autocorrelation of a WSS stochastic process:

Sxx(ω) = ∫ Rxx(τ)e^{−jωτ}dτ,  where Rxx(τ) = E{x(t + τ)x*(t)}

Rxx(τ) conjugate symmetric ⇒ Sxx(ω) real

Rxx(τ) = ∫ Sxx(ω)e^{jωτ} dω/2π

If x(t) is real ⇒ Rxx(τ) real symmetric ⇒ Sxx(ω) real symmetric

Similarly, cross-power spectrum Sxy(ω) = ∫ Rxy(τ)e^{−jωτ}dτ

Existence theorem:
If x(t) = a e^{j(ωt + φ)}, where ω has density fω(ω) and φ is uniform in (−π, π),
then Rxx(τ) = E{x(t + τ)x*(t)} = a²E{e^{jωτ}}
= a² ∫ e^{jωτ} fω(ω)dω = ∫ [2πa²fω(ω)] e^{jωτ} dω/2π
⇒ Sxx(ω) = 2πa²fω(ω),  where a² = Rxx(0)

Thus, choosing fω(ω) = S(ω)/(2πRxx(0)) for some S(ω) ≥ 0 ∀ω,
and x(t) = a e^{j(ωt + φ)}, would make Sxx(ω) = S(ω).

For real processes, choosing fω(ω) = S(ω)/(πRxx(0)) and
x(t) = a cos(ωt + φ) would make Sxx(ω) = S(ω).
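The complex-process construction can be sketched numerically: for x(t) = a·e^{j(ωt+φ)} with random ω and uniform φ, the phase cancels in x(t+τ)x*(t) and Rxx(τ) = a²E{e^{jωτ}}. Taking fω as standard normal is a hypothetical choice, for which E{e^{jωτ}} is the N(0,1) characteristic function e^{−τ²/2}:

```python
import numpy as np

# Existence-construction sketch: Rxx(tau) = a^2 * E{exp(j*w*tau)} for
# x(t) = a*exp(j*(w*t + phi)), w random with density f_w, phi uniform.
rng = np.random.default_rng(3)
N = 200_000
a = 1.5
w = rng.standard_normal(N)                  # hypothetical f_w = N(0, 1)
phi = rng.uniform(-np.pi, np.pi, N)

t, tau = 0.3, 0.8
x_t = a * np.exp(1j * (w * t + phi))
x_lag = a * np.exp(1j * (w * (t + tau) + phi))
R_hat = np.mean(x_lag * np.conj(x_t))       # estimate of E{x(t+tau) x*(t)}
R_theory = a**2 * np.exp(-tau**2 / 2)       # a^2 * characteristic function of N(0,1)
```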


If a WSS process x(t), input to a linear system with impulse response h(t), gives y(t) as a WSS output process, then

x(t + τ)y*(t) = x(t + τ) [∫ x(t − α)h(α)dα]*

Rxy(τ) = ∫ Rxx(τ + α)h*(α)dα = Rxx(τ) * h*(−τ)

Therefore Sxy(ω) = Sxx(ω)H*(ω),  where H(ω) = ∫ h(t)e^{−jωt}dt

Similarly Ryy(τ) = Rxy(τ) * h(τ)  ⇒  Syy(ω) = Sxy(ω)H(ω)
= Sxx(ω)|H(ω)|²

Pictorial representation: (2nd edition, fig.10-5, p.273)

Rxx(τ) → [h*(−τ)] → Rxy(τ) = Rxx(τ) * h*(−τ) → [h(τ)] → Ryy(τ) = Rxy(τ) * h(τ)

Sxx(ω) → [H*(ω)] → Sxy(ω) = Sxx(ω)H*(ω) → [H(ω)] → Syy(ω) = Sxy(ω)H(ω)

If x(t) is white, then Rxx(τ) = qδ(τ)   impulse
⇒ Sxx(ω) = q   flat
⇒ Syy(ω) = q|H(ω)|²
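The white-noise relation Syy(ω) = q|H(ω)|² is easy to see in discrete time: filter white noise of power q with a short FIR filter and average periodograms. A sketch; the filter taps, FFT size and trial count are hypothetical choices:

```python
import numpy as np

# Averaged periodograms of filtered white noise versus q*|H(w)|^2.
rng = np.random.default_rng(6)
q = 2.0
h = np.array([1.0, 0.5, 0.25])               # hypothetical FIR impulse response
n_fft, n_trials = 256, 2000
S_acc = np.zeros(n_fft)
for _ in range(n_trials):
    x = rng.normal(0.0, np.sqrt(q), n_fft)   # white noise, power q
    y = np.convolve(x, h)[:n_fft]            # filtered segment (small edge effect)
    S_acc += np.abs(np.fft.fft(y))**2 / n_fft
S_emp = S_acc / n_trials
S_theory = q * np.abs(np.fft.fft(h, n_fft))**2
max_rel_err = np.max(np.abs(S_emp - S_theory) / S_theory)
```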


Example 6: Hilbert Transform

Quadrature filter H(ω) = −j sgn(ω) = −j for ω > 0, +j for ω < 0
(allpass, −90° phase shift)
impulse response h(t) = 1/(πt)

x(t) → [H(ω)] → x̂(t),  x(t) real process
x̂(t) = x(t) * 1/(πt)   Hilbert transform

Example 6a: x(t) = a cos ω0t + b sin ω0t  (2nd edition, ex.10-16, p.284)
⇒ x̂(t) = a cos(ω0t − 90°) + b sin(ω0t − 90°)
        = a sin ω0t − b cos ω0t

Sxx̂(ω) = j sgn(ω) Sxx(ω)

Sx̂x̂(ω) = (−j sgn ω)(j sgn ω) Sxx(ω) = Sxx(ω),  since sgn²ω = 1

Analytic signal (complex process): z(t) = x(t) + jx̂(t)

Example 6a (contd.): (2nd edition, ex.10-16, p.284)
x(t) = a cos ω0t + b sin ω0t
⇒ z(t) = (a − jb)e^{jω0t}

x(t) → [2U(ω)] → z(t)
Frequency response = 1 + j(−j sgn ω) = 1 + sgn ω = 2U(ω)
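Example 6a can be checked numerically by building the analytic signal with the 2U(ω) filter in the FFT domain: double the positive frequencies and zero the negative ones. A sketch; a, b, ω0 and the sampling grid are hypothetical choices (ω0 is an integer so the tone fits the periodic grid exactly):

```python
import numpy as np

# FFT-based check of Example 6a: for x(t) = a*cos(w0 t) + b*sin(w0 t),
# the Hilbert transform is a*sin(w0 t) - b*cos(w0 t) and z(t) = (a - jb)e^{j w0 t}.
a, b = 1.0, 2.0
n = 1024
t = np.arange(n) * (2 * np.pi / n)           # one full period on the grid
w0 = 3
x = a * np.cos(w0 * t) + b * np.sin(w0 * t)

X = np.fft.fft(x)
U = np.zeros(n)
U[0] = 1.0                                   # keep DC once
U[1:n // 2] = 2.0                            # double positive frequencies
U[n // 2] = 1.0                              # keep Nyquist once
z = np.fft.ifft(X * U)                       # analytic signal x + j*x_hat
x_hat = z.imag
err = max(np.max(np.abs(x_hat - (a * np.sin(w0 * t) - b * np.cos(w0 * t)))),
          np.max(np.abs(z - (a - 1j * b) * np.exp(1j * w0 * t))))
```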


Szz(ω) = 4Sxx(ω)U(ω),  since U²(ω) = U(ω)
= 2Sxx(ω) + 2jSx̂x(ω)
since Sx̂x(ω) = S*xx̂(ω) = −j sgn(ω) Sxx(ω)

⇒ Rzz(τ) = 2Rxx(τ) + 2jRx̂x(τ)

Wiener-Khinchin theorem:
E{|x(t)|²} = Rxx(0) = ∫ Sxx(ω) dω/2π ≥ 0
It follows that Sxx(ω) ≥ 0

Property of correlation:

Rxx(τ) = ∫ Sxx(ω)e^{jωτ} dω/2π

|Rxx(τ)| ≤ ∫ |Sxx(ω)e^{jωτ}| dω/2π = ∫ Sxx(ω) dω/2π = Rxx(0)

Thus |Rxx(τ)| ≤ Rxx(0), i.e. Rxx(τ) is maximum at the origin.

If a process has other maxima, Rxx(τ1) = Rxx(0) for τ1 ≠ 0, then

|Rxx(τ + τ1) − Rxx(τ)|² = |E{[x(t + τ + τ1) − x(t + τ)]x*(t)}|²
≤ E{|x(t + τ + τ1) − x(t + τ)|²} E{|x(t)|²}


 E | x(t     1 ) |2 x (t     1 )x* (t   )  x* (t     1 )x (t   )
 | x (t   ) |2 E | x(t ) |2 
 [ Rxx (0)  Rxx ( 1 )  R ( 1 )  Rxx (0)]Rxx (0)
*
xx
but Rxx (0)  real, so Rxx* ( 1 )  Rxx (0)
=0
or | Rxx (   1 )  Rxx ( ) |2  0
or Rxx (   1 )  Rxx ( ), 

or Rxx ( ) periodic with period  1

Further, | Rxy ( ) |  Rxx (0) R yy (0)


2

because | Rxy ( ) | | E{x(t   ) y (t )} |  E{| x (t   ) | }E{| y (t ) | }


2 * 2 2 2

 Rxx (0) Ryy (0)


Anamitra Makur School of Electrical & Electronic Engineering 51

Discrete-Time (Digital) Processes

x[n]
mean η[n] = E{x[n]}
autocorrelation Rxx[n1, n2] = E{x[n1]x*[n2]}
m-th order statistics f(x1, …, xm; n1, …, nm)

SSS: f(x1, …, xm; n1, …, nm) = f(x1, …, xm; n1 + M, …, nm + M) ∀M

WSS: η[n] = η,  Rxx[n1, n2] = Rxx[n1 − n2] = Rxx[m],  m = n1 − n2

x[n] is white noise if x[n1] and x[n2] are uncorrelated for any n1 ≠ n2

Therefore Rxx[n1, n2] = q[n1]δ[n1 − n2],  where δ[n] = 1 for n = 0, 0 for n ≠ 0
If x[n] and y[n] are input and output to a linear system, then
y[n] = x[n] * h[n] = Σ_{k=−∞}^{∞} x[n − k]h[k]
where h[n] is the impulse response of the system, h[n] = L[δ[n]]

All earlier results are valid for discrete-time cases.

Power spectrum is the discrete-time Fourier transform of the autocorrelation:
Sxx(ω) = Σ_{m=−∞}^{∞} Rxx[m]e^{−jωm}

Sxx(ω) is periodic with period 2π, and Sxx(ω) ≥ 0.

Rxx[m] = (1/2π) ∫_{−π}^{π} Sxx(ω)e^{jωm} dω

Types of Power Spectrum

Continuous-time process:
power spectrum:       Rxx(τ) —FT→ Sxx(ω)
cross power spectrum: Rxy(τ) —FT→ Sxy(ω)
covariance spectrum:  Cxx(τ) —FT→ Scxx(ω)

Laplace transform version: Rxx(τ) —LT→ Sxx(s)
Sxx(ω) and Sxx(s) are different. On the imaginary axis s = jω:
Sxx(jω) = Sxx(ω)

Linear system:
impulse response h(t) —FT→ frequency response H(ω)
h(t) —LT→ causal transfer function H(s),  H(jω) = H(ω)

Sxy(ω) = Sxx(ω)H*(ω)      Sxy(s) = Sxx(s)H(−s)    (h(t) real)
Syy(ω) = Sxx(ω)|H(ω)|²    Syy(s) = Sxx(s)H(s)H(−s)


Discrete-time process:
power spectrum:       Rxx[m] —DTFT→ Sxx(ω)
cross power spectrum: Rxy[m] —DTFT→ Sxy(ω)
covariance spectrum:  Cxx[m] —DTFT→ Scxx(ω)

DTFT is a special case of the z-transform (on the unit circle).

z-transform version: Rxx[m] —zT→ Sxx(z) = Σ_{m=−∞}^{∞} Rxx[m]z^{−m}
Sxx(ω) and Sxx(z) are different. On the unit circle z = e^{jω}:
Sxx(e^{jω}) = Sxx(ω)

Linear system:
impulse response h[n] —DTFT→ frequency response H(ω)
h[n] —zT→ transfer function H(z),  H(e^{jω}) = H(ω)

Sxy(ω) = Sxx(ω)H*(ω)      Sxy(z) = Sxx(z)H(z⁻¹)    (h[n] real)
Syy(ω) = Sxx(ω)|H(ω)|²    Syy(z) = Sxx(z)H(z)H(z⁻¹)

Example 7: AR(1) Process

[Figure: real white noise x[n] → h[n] → y[n]]
y[n] = x[n] + a y[n − 1]

h[n] = aⁿ for n ≥ 0, 0 for n < 0  ⇒  H(z) = 1/(1 − az⁻¹),  |a| < 1 for stability

Then Syy(e^{jω}) = Sxx(e^{jω}) / [(1 − ae^{−jω})(1 − ae^{jω})]
                = Sxx(e^{jω}) / (1 − 2a cos ω + a²)

Since the excitation x[n] is white noise (WSS), Sxx(e^{jω}) = q


q a0 a0
S yy (e j ) 
1  2a cos   a 2
0  0 

q q  1 az 
or S yy ( z )   1  az 1  1  az 
1  az  1  az 
1
1  a2
 
1
Now
1  az 1
 
k 0
a k k
z  
k 0
a z k
k

  1 1
az 1
  1   al z l  1   al z l   ak z  k   a z  k
k
Also
1  az 1  az l 0 l 1 k  k 

q
Thus S yy ( z ) 
1 a2
a
k 
k
z k

Anamitra Makur School of Electrical & Electronic Engineering 57


But Syy(z) = Σ_{k=−∞}^{∞} Ryy[k]z^{−k}

Therefore Ryy[m] = [q/(1 − a²)] a^{|m|}

[Figure: Ryy[m] versus m, all positive for a > 0, alternating in sign for a < 0]

Ryy[0] = E{y²[n]} = q/(1 − a²),  where q = Rxx[0] = E{x²[n]}
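The AR(1) result Ryy[m] = q·a^|m|/(1 − a²) can be confirmed by running the recursion y[n] = x[n] + a·y[n − 1] on white noise and estimating time-average correlations. A sketch; the values of a, q and the run length are hypothetical:

```python
import numpy as np

# Simulate the AR(1) process and check Ryy[m] = q*a^|m|/(1 - a^2) at lags 0 and 1.
rng = np.random.default_rng(7)
a, q = 0.6, 1.0
n = 200_000
x = rng.normal(0.0, np.sqrt(q), n)           # white excitation, power q
y = np.empty(n)
y[0] = x[0]
for i in range(1, n):
    y[i] = x[i] + a * y[i - 1]               # y[n] = x[n] + a*y[n-1]
y = y[1000:]                                 # discard start-up transient

Ryy0_hat = np.mean(y * y)                    # lag-0 estimate
Ryy1_hat = np.mean(y[1:] * y[:-1])           # lag-1 estimate
Ryy0 = q / (1 - a**2)                        # theory, lag 0
Ryy1 = q * a / (1 - a**2)                    # theory, lag 1
```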


Random Walk, Wiener Process
A fair coin is tossed once every T seconds, and a step of (real) length s is taken for heads, or −s for tails.

[Figure: staircase sample function of the random walk x(t), one step of ±s every T seconds, for the outcome sequence h h h t t t h t t]

x(t) is a stochastic process called the random walk.

Starting at 0, x(nT) will have the value ms = (k − (n − k))s if there are k heads and n − k tails.

⇒ P{x(nT) = ms} = C(n, k) 0.5^k 0.5^{n−k},  C(n, k) = binomial coefficient

For large n and k ≈ np (DeMoivre-Laplace approximation, |k − np| ≲ √(npq)):
C(n, k) p^k q^{n−k} ≈ (1/√(2πnpq)) e^{−(k−np)²/2npq}

⇒ P{x(nT) = ms} ≈ (1/√(2π(n/4))) e^{−(m/2)²/(2·n/4)},  since m = 2k − n and p = q = ½

which is like a normal density in m/2 with mean 0, variance n/4

therefore P{x(nT) ≤ ms} ≈ P{N(0, n/4) ≤ m/2} = P{N(0, 1) ≤ (m/2)/√(n/4)}

Each step is independent with mean 0 and variance s².
x(nT) is the sum of n such steps
⇒ E{x(nT)} = 0,  E{x²(nT)} = ns²
Let T → 0 with s² = αT.
Then the discrete-state process x(t) becomes a continuous-state process called the Wiener process: w(t) = lim_{T→0} x(t)

[Figure: a sample function of the Wiener process w(t)]

Substituting w = ms and t = nT:
(m/2)/√(n/4) = m/√n = (w/s)/√(t/T) = w/√(αt)

w / t
P{w (t )  w}   N (0,1)


1
Or, the first-order density of w(t) is f ( w, t )  e  w / 2t
2

(normal with mean 0, variance αt) 2t

autocorrelation Rww (t1 , t 2 )  E{w (t1 )w (t 2 )}


 E{w (t1 )[w (t1 )  w (t 2 )  w (t1 )]}
 E{w 2 (t1 )}  E{w (t1 )[w (t 2 )  w (t1 )]}

Anamitra Makur School of Electrical & Electronic Engineering 62


Let t1 ≤ t2; then w(t1) and w(t2) − w(t1) are independent

⇒ E{w(t1)[w(t2) − w(t1)]} = E{w(t1)}E{w(t2) − w(t1)},
which is 0 since E{w(t)} = 0

Therefore in this case Rww(t1, t2) = E{w²(t1)} = t1s²/T = αt1

Similarly, it may be shown that for t1 ≥ t2, Rww(t1, t2) = αt2

Therefore, Rww(t1, t2) = α min(t1, t2)
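The Wiener-process correlation Rww(t1, t2) = α·min(t1, t2) can be sketched with the random-walk approximation itself: ±s steps every T seconds with s² = αT. The parameter values below are hypothetical; the sum of m independent ±1 steps is generated as 2·Binomial(m, ½) − m:

```python
import numpy as np

# Random-walk approximation of the Wiener process. Theory:
# E{w^2(t)} = alpha*t and Rww(t1, t2) = alpha*min(t1, t2).
rng = np.random.default_rng(8)
alpha, T = 2.0, 1e-3
s = np.sqrt(alpha * T)                       # step size with s^2 = alpha*T
n_paths = 100_000
i1, i2 = 400, 800                            # step counts: t1 = 0.4 s, t2 = 0.8 s
t1, t2 = i1 * T, i2 * T

# Sum of m independent +/-1 steps = 2*Binomial(m, 1/2) - m, scaled by s.
w1 = s * (2.0 * rng.binomial(i1, 0.5, n_paths) - i1)                    # w(t1)
w2 = w1 + s * (2.0 * rng.binomial(i2 - i1, 0.5, n_paths) - (i2 - i1))   # w(t2)

R_hat = np.mean(w1 * w2)                     # estimate of Rww(t1, t2)
R_theory = alpha * min(t1, t2)
```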

Thermal Noise
Noise due to thermal agitation of charge carriers at temperatures above 0 K.
Reactive elements are assumed noiseless.
Each resistor is replaced by a noiseless resistor in parallel with a current source.

[Figure: noiseless resistor R in parallel with noise current source ni(t)]

ni(t) is a normal process, zero-mean, white:
S_nini(ω) = 2kT/R,  k = Boltzmann constant, T = temperature in kelvin

The noise source of one resistor is independent of that of another resistor.


Let there be a passive network of R, L and C.

[Figure: one-port network n1 with terminals a, b]
v(t) = voltage across a and b due to ni(t)
Z(s) = impedance between a and b

Then consider separating R:

[Figure: current I(ω) applied at a-b; network n2 with voltage V(ω) across R]

n2 is the network without R, consisting only of reactive components.
n1 is a linear system with frequency response H(ω).

V ( ) output voltage
H ( )   in reverse mode
I ( ) input current

If I ( ) is applied between a and b, then input power


= | I ( ) | Re{Z( j )}, since Z( s ) is the impedance between a and b.
2

Since n2 does not dissipate any power, power dissipated is across


R, which is equal to V ( ) R
2

V ( )
2

I ( ) Re Z( j ) 
2
Therefore
R

H ( )  R  Re Z( j )
2
or

Anamitra Makur School of Electrical & Electronic Engineering 66


Now, for network $n_1$, the input process (current) has psd $S_{n_i n_i}(\omega) = \dfrac{2kT}{R}$
The output process (voltage) has psd
$S_{vv}(\omega) = S_{n_i n_i}(\omega)\,|H(\omega)|^2 = 2kT\,\mathrm{Re}\{Z(j\omega)\}$
$= kT\,[\,Z(j\omega) + Z^*(j\omega)\,] = kT\,[\,Z(j\omega) + Z(-j\omega)\,]$, since $Z^*(j\omega) = Z(-j\omega)$

Therefore $R_{vv}(\tau) = kT\,z(\tau) + kT\,z(-\tau)$:
since $z(t)$ is the inverse transform of $Z(s)$, $z(-t)$ is the inverse transform of $Z(-s)$.

But $z(t)$ is valid for $t \ge 0$, $z(t) = 0$ for $t < 0$.

So $R_{vv}(\tau) = kT\,z(\tau)$, $\quad \tau > 0$
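As a numerical check (not in the notes): for a resistor in parallel with a capacitor, $Z(j\omega) = R/(1 + j\omega RC)$, and integrating $S_{vv}(\omega) = 2kT\,\mathrm{Re}\{Z(j\omega)\}$ over all $\omega$ should recover the equipartition result $R_{vv}(0) = kT/C$; the component values below are arbitrary.

```python
import numpy as np

k = 1.380649e-23       # Boltzmann constant, J/K
Temp = 300.0           # temperature in kelvin
R, C = 1e4, 1e-9       # hypothetical 10 kOhm resistor in parallel with 1 nF

# Z(jw) = R / (1 + jwRC); voltage psd S_vv(w) = 2kT Re{Z(jw)}
w = np.linspace(-1e8, 1e8, 400_001)      # grid much wider than 1/RC = 1e5 rad/s
dw = w[1] - w[0]
Z = R / (1 + 1j * w * R * C)
Svv = 2 * k * Temp * np.real(Z)

# R_vv(0) = integral of S_vv(w) over w / 2pi; theory gives kT/C
Rvv0 = Svv.sum() * dw / (2 * np.pi)
print(Rvv0, k * Temp / C)
```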

Shot Noise
$s(t) = \displaystyle\sum_i h(t - t_i)$, where $h(t)$ is a real function and the $t_i$ are a set of Poisson points with average density $\lambda$.
$s(t)$ is shot noise.

$s(t)$ is the output of a linear system driven by a train of Poisson impulses,
$s(t) = L\left[\displaystyle\sum_i \delta(t - t_i)\right]$, where $L$ has impulse response $h(t)$.

Since the Poisson impulses are SSS, $s(t)$ is also SSS.
Shot noise is observed as the output of a system activated by a random sequence of impulses, such as particle emissions.

$E\{s(t)\} = L[E\{z(t)\}] = L[\lambda] = \lambda \displaystyle\int_{-\infty}^{\infty} h(t)\,dt = \lambda H(0)$

$S_{zz}(\omega) = 2\pi\lambda^2\delta(\omega) + \lambda$ (earlier result, Poisson impulses)

Therefore $S_{ss}(\omega) = S_{zz}(\omega)\,|H(\omega)|^2$
$= 2\pi\lambda^2 |H(\omega)|^2 \delta(\omega) + \lambda|H(\omega)|^2 = 2\pi\lambda^2 H^2(0)\,\delta(\omega) + \lambda|H(\omega)|^2$

Neglecting the dc term, $S_{ss}(\omega)$ has the shape of $|H(\omega)|^2$.

Let $\rho(\tau)$ be the inverse transform of $|H(\omega)|^2$.

Since $|H(\omega)|^2 = H(\omega)H^*(\omega)$, and the inverse transform of $H(\omega)$ is $h(t)$,
the inverse transform of $H^*(\omega)$ is $h^*(-t)$,
$\rho(\tau) = h(\tau) * h^*(-\tau) = h(\tau) * h(-\tau)$ since $h(t)$ is real

Then $R_{ss}(\tau) = \lambda^2 H^2(0) + \lambda\,\rho(\tau)$

variance $\sigma_s^2 = R_{ss}(0) - \lambda^2 H^2(0) = \lambda^2 H^2(0) + \lambda\,\rho(0) - \lambda^2 H^2(0) = \lambda\,\rho(0)$

but $\rho(0) = \displaystyle\int_{-\infty}^{\infty} h(t)\,h(0 + t)\,dt = \displaystyle\int_{-\infty}^{\infty} h^2(t)\,dt$

so $\sigma_s^2 = \lambda\displaystyle\int_{-\infty}^{\infty} h^2(t)\,dt$
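These two moments (Campbell's theorem) can be verified by simulation. A sketch with an assumed pulse $h(t) = e^{-t/\tau_c}$, $t \ge 0$, so that $H(0) = \tau_c$ and $\int h^2\,dt = \tau_c/2$; the rate and step size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 50.0       # Poisson density (impulses per unit time)
tau_c = 0.05     # time constant of the assumed pulse h(t) = exp(-t/tau_c), t >= 0
dt = 1e-3
n = 200_000      # 200 time units

# train of Poisson impulses: bin counts divided by dt approximate sum of deltas
z = rng.poisson(lam * dt, size=n) / dt

# pass through the linear system with impulse response h(t)
h = np.exp(-np.arange(0.0, 10 * tau_c, dt) / tau_c)
s = np.convolve(z, h)[:n] * dt
s = s[n // 10:]                 # discard the start-up transient

# E{s} = lam * H(0) = lam * tau_c ; var{s} = lam * int h^2 dt = lam * tau_c / 2
print(s.mean(), lam * tau_c)
print(s.var(), lam * tau_c / 2)
```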




Modulation
Modulating process: $w(t) = r(t)\,e^{j\varphi(t)}$, with $\mathrm{Re}\{w(t)\} = a(t)$, $\mathrm{Im}\{w(t)\} = b(t)$
Modulated process: $z(t) = w(t)\,e^{j\omega_0 t}$, where $e^{j\omega_0 t}$ = (complex) sinusoid (carrier)

Demodulation: $w(t) = z(t)\,e^{-j\omega_0 t}$

Note:
$\mathrm{Re}\{z(t)\} = x(t) = a(t)\cos(\omega_0 t) - b(t)\sin(\omega_0 t) = r(t)\cos[\omega_0 t + \varphi(t)]$
$\mathrm{Im}\{z(t)\} = y(t) = b(t)\cos(\omega_0 t) + a(t)\sin(\omega_0 t) = r(t)\sin[\omega_0 t + \varphi(t)]$

Amplitude modulation by $r(t)$, phase modulation by $\varphi(t)$.

Consider $x(t)$: $E\{x(t)\} = E\{a(t)\}\cos(\omega_0 t) - E\{b(t)\}\sin(\omega_0 t) = 0$ since $a(t), b(t)$ are zero-mean, jointly WSS.

$E\{x(t+\tau)x(t)\} = E\big\{[\,a(t+\tau)\cos\omega_0(t+\tau) - b(t+\tau)\sin\omega_0(t+\tau)\,][\,a(t)\cos(\omega_0 t) - b(t)\sin(\omega_0 t)\,]\big\}$

Expanding each product of sinusoids with the product-to-sum identities, with $R_{ab}(\tau) = E\{a(t+\tau)b(t)\}$:

$= \tfrac{1}{2}[\,R_{aa}(\tau) + R_{bb}(\tau)\,]\cos(\omega_0\tau) + \tfrac{1}{2}[\,R_{ab}(\tau) - R_{ba}(\tau)\,]\sin(\omega_0\tau)$
$\quad + \tfrac{1}{2}[\,R_{aa}(\tau) - R_{bb}(\tau)\,]\cos[\omega_0(2t+\tau)] - \tfrac{1}{2}[\,R_{ab}(\tau) + R_{ba}(\tau)\,]\sin[\omega_0(2t+\tau)]$
2 2

$x(t)$ is WSS iff $R_{xx}(t_1, t_2) = R_{xx}(\tau)$
$\Leftrightarrow R_{aa}(\tau) = R_{bb}(\tau)$ and $R_{ab}(\tau) = -R_{ba}(\tau)$

$x(t)$ WSS $\Leftrightarrow$ $y(t)$ WSS

$R_{xx}(\tau) = R_{yy}(\tau) = R_{aa}(\tau)\cos(\omega_0\tau) + R_{ab}(\tau)\sin(\omega_0\tau)$
$R_{xy}(\tau) = -R_{yx}(\tau) = R_{ab}(\tau)\cos(\omega_0\tau) - R_{aa}(\tau)\sin(\omega_0\tau)$

Rww ( )  E a(t   )  jb(t   ) a(t )  jb(t ) 


 Raa ( )  Rbb ( )  jRab ( )  jRba ( )

 2 Raa ( )  2 jRab ( )
Rzz ( )  E{w (t   )e j0 ( t  ) w  (t )e  j0t }
 Rww ( )e j0

For psd, S ww ( )  2S aa ( )  2 jSab ( ) and S zz ( )  S ww (  0 )
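A simulation sketch of these relations (assumptions: $a(t)$, $b(t)$ are taken as independent AR(1) processes with identical statistics, so $R_{aa} = R_{bb}$, $R_{ab} = -R_{ba} = 0$ and the WSS conditions hold; $R_{xx}(\tau)$ should then reduce to $R_{aa}(\tau)\cos(\omega_0\tau)$):

```python
import numpy as np

rng = np.random.default_rng(2)

dt, n = 0.01, 400_000
w0 = 40.0                 # carrier frequency in rad/s (arbitrary)

def ar1():                # zero-mean lowpass AR(1) process
    v = rng.standard_normal(n)
    out, acc, rho = np.empty(n), 0.0, 0.99
    for i in range(n):
        acc = rho * acc + v[i]
        out[i] = acc
    return out

a, b = ar1(), ar1()       # independent, identical statistics
t = np.arange(n) * dt
x = a * np.cos(w0 * t) - b * np.sin(w0 * t)

# time-averaged R_xx at a few lags vs the formula Raa(tau) cos(w0 tau)
diffs = []
for m in (0, 5, 10, 20):
    Rxx = np.mean(x[m:] * x[:n - m])
    Raa = np.mean(a[m:] * a[:n - m])
    diffs.append(Rxx - Raa * np.cos(w0 * m * dt))
print(diffs)              # all small relative to Raa(0) = 1/(1-0.99^2) ~ 50
```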

Bandlimited Process
$x(t)$ is bandlimited if $R_{xx}(0) < \infty$ (finite power)
and $S_{xx}(\omega) = 0$ for $|\omega| > \sigma$ (limited spectral width)


1) A bandlimited process may be sampled:
$x(t+\tau) = L[x(t)]$ where $H(\omega) = e^{j\omega\tau}$
But $e^{j\omega\tau}$ is a continuous function, hence may be expanded using a Taylor series:
$e^{j\omega\tau} = \displaystyle\sum_{n=0}^{\infty} \dfrac{\tau^n}{n!}\,\dfrac{\partial^n e^{j\omega\tau}}{\partial\tau^n}\bigg|_{\tau=0} = \sum_{n=0}^{\infty} \dfrac{(j\omega)^n\tau^n}{n!}$

If $x(t) \leftrightarrow X(\omega)$, then $\dfrac{d}{dt}x(t) \leftrightarrow j\omega X(\omega)$

Therefore $L[x(t)] = \displaystyle\sum_{n=0}^{\infty} \dfrac{\tau^n}{n!}\,x^{(n)}(t)$, since $x^{(1)}(t) = L_1[x(t)]$ with $H_1(\omega) = j\omega$, a differentiator.

Thus $x(t+\tau) = \displaystyle\sum_{n=0}^{\infty} \dfrac{\tau^n}{n!}\,x^{(n)}(t)$

$x^{(n)}(t)$ exists for all $n$:
$R_{xx}(\tau) = \displaystyle\int_{-\sigma}^{\sigma} S_{xx}(\omega)\,e^{j\omega\tau}\,\dfrac{d\omega}{2\pi}$ since bandlimited; also $\displaystyle\int S_{xx}(\omega)\,d\omega$ is limited since $R_{xx}(0)$ is.

So $R_{xx}^{(n)}(\tau) = \displaystyle\int_{-\sigma}^{\sigma} j^n\omega^n\,S_{xx}(\omega)\,e^{j\omega\tau}\,\dfrac{d\omega}{2\pi} = \int_{-\sigma}^{\sigma} (j\omega)^n\,S_{xx}(\omega)\,e^{j\omega\tau}\,\dfrac{d\omega}{2\pi}$

Thus, all derivatives of $R_{xx}(\tau)$ exist $\Rightarrow$ all derivatives of $x(t)$ exist.

Therefore, knowing $x^{(n)}(t)$ for, say, $t = 0$, for all $n$ is sufficient to construct $x(t)$ for all $t$. Thus, a countably infinite set of values completely specifies the bandlimited process.

2) A bandlimited process is continuous and smooth:
$E\{|x(t+\tau) - x(t)|^2\} = R_{xx}(0) - R_{xx}(\tau) - R_{xx}(-\tau) + R_{xx}(0)$


 
d d
 S  2  S xx ( )[1  cos( )]
j  j
 ( )[2  e e ]
2 2
xx
 


   d
 2  S xx ( )  2 sin 2  
  2  2

  2 2  d         
2 2
 4  S xx ( )  since sin   or sin 2  
  4  2  2  2  2  4
and S xx ( )  0

d
  S

xx ( ) 2 2
2
  2 2 Rxx (0)

Thus E | x(t   )  x(t ) |    Rxx (0) , or signal does not change


2 2 2

much for small 

Anamitra Makur School of Electrical & Electronic Engineering 77

Sampling Expansion
$x(t + \tau) = x(t) * F^{-1}[e^{j\omega\tau}]$

Expand $e^{j\omega\tau}$ in a Fourier series in the interval $-\sigma \le \omega \le \sigma$:
$e^{j\omega\tau} = \displaystyle\sum_{n=-\infty}^{\infty} a_n\,e^{jnT\omega}$, where $T = \dfrac{2\pi}{2\sigma} = \dfrac{\pi}{\sigma}$ is the fundamental period

$a_n = \dfrac{1}{2\sigma}\displaystyle\int_{-\sigma}^{\sigma} e^{j\omega\tau}\,e^{-jnT\omega}\,d\omega = \dfrac{\sin\sigma(\tau - nT)}{\sigma(\tau - nT)}$

Since $x(t)$ is bandlimited to $\sigma$,
$x(t + \tau) = x(t) * F^{-1}\left[\displaystyle\sum_{n=-\infty}^{\infty} \dfrac{\sin\sigma(\tau - nT)}{\sigma(\tau - nT)}\,e^{jnT\omega}\right]$



$= \displaystyle\sum_{n=-\infty}^{\infty} \dfrac{\sin\sigma(\tau - nT)}{\sigma(\tau - nT)}\; x(t) * F^{-1}[e^{jnT\omega}] = \sum_{n=-\infty}^{\infty} \dfrac{\sin\sigma(\tau - nT)}{\sigma(\tau - nT)}\; x(t + nT)$

Putting $t = 0$: $\quad x(\tau) = \displaystyle\sum_{n=-\infty}^{\infty} x(nT)\,\dfrac{\sin\sigma(\tau - nT)}{\sigma(\tau - nT)}$
(continuous process = sampled (discrete) process passed through the ideal low-pass filter impulse response)

For any $T < \dfrac{\pi}{\sigma}$ it is still valid, since then $e^{j\omega\tau}$ is expanded over $(-\bar\sigma, \bar\sigma)$ for some $\bar\sigma > \sigma$.
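A numerical sketch of the expansion (the test function and band edge are arbitrary; a deterministic bandlimited function stands in for a random process):

```python
import numpy as np

sigma = 2 * np.pi * 10.0            # band edge in rad/s (content below 10 Hz)
T = np.pi / sigma                   # sampling interval T = pi/sigma

# bandlimited test function: sinc band (0-4 Hz) shifted by a 3 Hz cosine -> <= 7 Hz
def f(t):
    return np.sinc(8.0 * t) * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-400, 401)
samples = f(n * T)

def reconstruct(tau):
    # x(tau) = sum_n x(nT) sin(sigma(tau - nT)) / (sigma(tau - nT))
    return np.sum(samples * np.sinc(sigma * (tau - n * T) / np.pi))

errs = [abs(reconstruct(tau) - f(tau)) for tau in (0.013, 0.27, -0.41)]
print(errs)      # matches to within the truncation error of the sum
```

Note that `np.sinc(x)` is $\sin(\pi x)/(\pi x)$, so the argument is divided by $\pi$.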


Matched Filter (Detecting Signal in Noise)

Received signal $x(t) = f(t) + v(t)$, where $f(t)$ = (shifted, scaled) known signal and $v(t)$ = noise process, WSS, psd $S_{vv}(\omega)$ (all real processes).
$x(t)$ is passed through a filter $h(t)$, and the output $y(t)$ is sampled at $t = t_0$.

output $y(t) = y_f(t) + y_v(t)$

$y_f(t) = \displaystyle\int_{-\infty}^{\infty} f(t - \alpha)\,h(\alpha)\,d\alpha$, $\quad Y_f(\omega) = F(\omega)H(\omega)$, $\quad y_f(t) = \displaystyle\int F(\omega)H(\omega)\,e^{j\omega t}\,\dfrac{d\omega}{2\pi}$

Similarly, $y_v(t) = v(t) * h(t)$
The output is sampled at $t = t_0$, when the SNR is $r = \dfrac{y_f^2(t_0)}{E\{y_v^2(t_0)\}}$
Find $h(t)$ to maximize $r$.

Colored noise:
$r = \dfrac{\left|\displaystyle\int F(\omega)H(\omega)\,e^{j\omega t_0}\,\dfrac{d\omega}{2\pi}\right|^2}{E\{y_v^2(t_0)\}} = \dfrac{\left|\displaystyle\int \dfrac{F(\omega)}{\sqrt{S_{vv}(\omega)}}\,e^{j\omega t_0}\cdot\sqrt{S_{vv}(\omega)}\,H(\omega)\,\dfrac{d\omega}{2\pi}\right|^2}{E\{y_v^2(t_0)\}}$

$\le \dfrac{\displaystyle\int \dfrac{|F(\omega)|^2}{S_{vv}(\omega)}\,\dfrac{d\omega}{2\pi}\;\displaystyle\int S_{vv}(\omega)\,|H(\omega)|^2\,\dfrac{d\omega}{2\pi}}{E\{y_v^2(t_0)\}}$ (by the Cauchy-Schwarz inequality)

but $E\{y_v^2(t_0)\} = R_{y_v y_v}(0) = \displaystyle\int S_{y_v y_v}(\omega)\,\dfrac{d\omega}{2\pi} = \displaystyle\int S_{vv}(\omega)\,|H(\omega)|^2\,\dfrac{d\omega}{2\pi}$

So $r \le \displaystyle\int \dfrac{|F(\omega)|^2}{S_{vv}(\omega)}\,\dfrac{d\omega}{2\pi}$, with equality if $k\left[\dfrac{F(\omega)}{\sqrt{S_{vv}(\omega)}}\,e^{j\omega t_0}\right]^* = \sqrt{S_{vv}(\omega)}\,H(\omega)$

for some constant $k$, or $H(\omega) = k\,\dfrac{F^*(\omega)}{S_{vv}(\omega)}\,e^{-j\omega t_0}$

White noise:
If $v(t)$ is white noise, $S_{vv}(\omega) = S_0$ and $H(\omega) = k\,F^*(\omega)\,e^{-j\omega t_0}$, or $h(t) = k\,f(t_0 - t)$.
Thus, the optimum filter impulse response is the (scaled, shifted) time-reversed signal, hence the name matched filter.
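A discrete-time sketch of the white-noise case (the pulse shape and $S_0$ are arbitrary assumptions): with $h[n] = f[n_0 - n]$ the output SNR attains the bound $\sum f^2 / S_0$, while any other filter, here a rectangular one, does worse.

```python
import numpy as np

# known pulse f[n] (hypothetical Gaussian shape) in white noise of psd S0
m = np.arange(64)
f = np.exp(-0.5 * ((m - 20) / 4.0) ** 2)
S0 = 1.0

# matched filter: h[n] = f[n0 - n], the discrete analogue of h(t) = k f(t0 - t)
n0 = 63
h = f[::-1]

# SNR = yf(n0)^2 / E{yv(n0)^2}; for white noise E{yv^2} = S0 * sum(h^2)
yf = np.convolve(f, h)[n0]
snr_matched = yf ** 2 / (S0 * np.sum(h ** 2))

# contrast: a rectangular filter sampled at the same instant
g = np.ones(64)
snr_rect = np.convolve(f, g)[n0] ** 2 / (S0 * np.sum(g ** 2))

bound = np.sum(f ** 2) / S0
print(snr_matched, bound, snr_rect)   # matched filter achieves the bound
```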
Colored noise from white noise (using innovations): (2nd edition, chap. 10-5, p.300)
$x(t)$ is first passed through a whitening filter, denoted here $\Gamma_v(s)$, and its output $x_1(t)$ through a filter $H_1(\omega)$, giving $y(t)$.
$x(t) = f(t) + v(t)$ with $v(t)$ colored noise having psd $S_{vv}(\omega)$.
Use a whitening filter $\Gamma_v(s)$ such that $S_{vv}(\omega) = \dfrac{1}{|\Gamma_v(j\omega)|^2}$

$\Rightarrow x_1(t) = f_1(t) + i_v(t)$, where $i_v(t)$ is the innovation of $v(t)$, with $S_{i_v i_v}(\omega) = 1$

Maximizing the SNR for input $x_1(t)$ $\Leftrightarrow$ maximizing the SNR for input $x(t)$

Optimum $H_1(\omega) = k\,F_1^*(\omega)\,e^{-j\omega t_0}$, where $F_1^*(\omega) = F^*(\omega)\,\Gamma_v^*(j\omega)$

Cascading with $\Gamma_v(s)$, the optimum filter is
$H(\omega) = H_1(\omega)\,\Gamma_v(j\omega) = k\,F^*(\omega)\,\Gamma_v^*(j\omega)\,\Gamma_v(j\omega)\,e^{-j\omega t_0} = k\,\dfrac{F^*(\omega)}{S_{vv}(\omega)}\,e^{-j\omega t_0}$

Smoothing (Estimating Signal in Noise)

$x(t) = f(t) + v(t)$, where $f(t)$ = unknown signal and $v(t)$ = noise; $y(t) = x(t) * h(t)$ = estimate of $f(t)$ (all real processes).

$x(t)$ itself is an estimate of $f(t)$. Since $v(t)$ is zero mean, $x(t)$ is an unbiased estimate. However, the variance is large (it equals $E\{v^2(t)\}$).

Since $v(t)$ is white, the SNR may be improved by (weighted) averaging:
$y(t) = \displaystyle\int_{-T}^{T} x(t - \alpha)\,h(\alpha)\,d\alpha = x(t) * h(t)$
for some weighting function (window) $h(t)$: $h(t) \ne 0$ for $-T \le t \le T$, $h(t) = 0$ otherwise, and $h(-t) = h(t)$ (symmetric).
Now, the estimator is biased, with bias
$b = E\{y(t) - f(t)\} = E\{y_f(t) + y_v(t)\} - f(t) = y_f(t) - f(t) = h(t) * f(t) - f(t)$
since $E\{y_v(t)\} = 0$.

And the estimator variance is $\sigma^2 = E\{y^2(t)\} - E^2\{y(t)\} = E\{y_v^2(t)\}$ (zero-mean noise)
$= \displaystyle\int q(t - \alpha)\,h^2(\alpha)\,d\alpha$ (from slide 34)

Now, the mean square estimation error
$e = E\{[y(t) - f(t)]^2\} = E\{[y_f(t) - f(t) + y_v(t)]^2\} = [y_f(t) - f(t)]^2 + E\{y_v^2(t)\} = b^2 + \sigma^2$

For large $T$, $\sigma^2$ is small but $b$ is large. (Typical behavior: as a function of $T$, the error $e$ first falls with $\sigma^2$, reaches a minimum $e_m$, and then rises with $b^2$; with no averaging, $e = E\{v^2(t)\}$.)

If $f(t)$ is known to be bandlimited, then $H(\omega) = 0$ outside the band does not introduce a bias but reduces $\sigma^2$. Thus, only in-band noise matters, and out-of-band noise may be rejected by a filter.

Papoulis shows that for slowly varying $q(t)$ and quadratic function $f(t)$, $e_m$ is achieved when $\sigma = 2b$, and for a parabolic window $h(t)$ which depends on $f(t)$.
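The tradeoff can be illustrated numerically (signal, noise level, and window half-widths are arbitrary choices, and a rectangular rather than parabolic window is used): the empirical mean square error first falls with the window half-width $T$ as $\sigma^2$ drops, then rises as $b^2$ dominates.

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 0.01
t = np.arange(0.0, 20.0, dt)
f = np.sin(0.5 * t)                       # slowly varying "unknown" signal
x = f + rng.standard_normal(t.size)       # plus unit-variance white noise

def emp_mse(T_win):
    # symmetric rectangular window of half-width T_win
    m = int(T_win / dt)
    h = np.ones(2 * m + 1) / (2 * m + 1)
    y = np.convolve(x, h, mode="same")
    sl = slice(m, t.size - m)             # ignore edge effects
    return np.mean((y[sl] - f[sl]) ** 2)

errs = {T_win: emp_mse(T_win) for T_win in (0.01, 0.2, 1.0, 5.0)}
print(errs)     # falls while averaging suppresses noise, rises once bias wins
```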
Ergodicity
$x(t)$ is a real stationary process, mean $\eta = E\{x(t)\}$ (ensemble average).

Time average: $\eta_T = \dfrac{1}{2T}\displaystyle\int_{-T}^{T} x(t)\,dt$

(Sketch: several realizations of $x(t)$; the time average runs along one realization, while the ensemble average runs across the realizations at a fixed $t$.)

$\eta_T$ is a random variable, $E\{\eta_T\} = \dfrac{1}{2T}\displaystyle\int_{-T}^{T} E\{x(t)\}\,dt = \eta$
Therefore, $\eta_T$ is an estimate of $\eta$.

$x(t)$ is a mean-ergodic process if $\eta_T \to \eta$ as $T \to \infty$.
This is true iff $\sigma_T \to 0$ as $T \to \infty$, where $\sigma_T^2$ = variance of $\eta_T$.
Thus, mean-ergodicity $\Rightarrow$ the ensemble average may be replaced by the time average.

$w(t) = \dfrac{1}{2T}\displaystyle\int_{t-T}^{t+T} x(\alpha)\,d\alpha = L[x(t)] = x(t) * h(t)$, $\quad \eta_T = w(0)$
where $h(t) = \dfrac{1}{2T}$ for $-T \le t \le T$ and 0 otherwise.

$\rho(t) = h(t) * h(-t) = \begin{cases} \dfrac{1}{2T}\left(1 - \dfrac{|t|}{2T}\right) & |t| \le 2T \\ 0 & \text{else} \end{cases}$ (a triangle on $(-2T, 2T)$ with peak $1/2T$)
Cww ( )  LL[C xx ( )]  C xx ( ) *  ( )
where C xx ( ) is autocovariance of x(t)
 |  |
2T
1

2T C
 2T
xx (   ) 1 
 2T 
d

 |  |  
2T 2T
1 2 
 T2  Cww (0) 
2T 2T Cxx (  )1  2T d  2T C
0
xx ( ) 1 
 2T 
d

 
2T
1 
Thus, x(t) is mean-ergodic iff 
T 0
C xx ( ) 1 
 2T 
d  0 as T  

T
Slutsky’s Theorem: mean-ergodic  1  Cxx ( )d  0 as T  
T 0
Proof: (  ) mean-ergodic means  T  0 as T  
Anamitra Makur School of Electrical & Electronic Engineering 89

Consider the random variables $\eta_T$ and $x(0)$. They both have mean $\eta$.

So $E\{[\eta_T - \eta][x(0) - \eta]\} = E\left\{\dfrac{1}{2T}\displaystyle\int_{-T}^{T} [x(t) - \eta][x(0) - \eta]\,dt\right\} = \dfrac{1}{2T}\displaystyle\int_{-T}^{T} E\{[x(t) - \eta][x(0) - \eta]\}\,dt = \dfrac{1}{2T}\displaystyle\int_{-T}^{T} C_{xx}(t)\,dt$

but $E^2\{[\eta_T - \eta][x(0) - \eta]\} \le E\{[\eta_T - \eta]^2\}\,E\{[x(0) - \eta]^2\} = \sigma_T^2\,C_{xx}(0)$

Therefore $\dfrac{1}{T}\displaystyle\int_{0}^{T} C_{xx}(\tau)\,d\tau \le \sigma_T\sqrt{C_{xx}(0)} \to 0$ as $T \to \infty$


 
($\Leftarrow$) $\sigma_T^2 = \dfrac{1}{T}\displaystyle\int_{0}^{2T} C_{xx}(\alpha)\left(1 - \dfrac{\alpha}{2T}\right)d\alpha$
$= \dfrac{1}{T}\displaystyle\int_{0}^{2T_0} C_{xx}(\alpha)\left(1 - \dfrac{\alpha}{2T}\right)d\alpha + \dfrac{1}{T}\displaystyle\int_{2T_0}^{2T} C_{xx}(\alpha)\left(1 - \dfrac{\alpha}{2T}\right)d\alpha = I_1 + I_2$

$I_1 \le \dfrac{1}{T}\displaystyle\int_{0}^{2T_0} C_{xx}(0)\left(1 - \dfrac{\alpha}{2T}\right)d\alpha$ since $|C_{xx}(\alpha)| \le C_{xx}(0)$
$\le \dfrac{1}{T}\displaystyle\int_{0}^{2T_0} C_{xx}(0)\cdot 1\,d\alpha = \dfrac{2T_0}{T}\,C_{xx}(0) \to 0$ as $T \to \infty$

$I_2 = \dfrac{1}{2T^2}\displaystyle\int_{2T_0}^{2T} C_{xx}(\alpha)\,(2T - \alpha)\,d\alpha = \dfrac{1}{2T^2}\displaystyle\int_{2T_0}^{2T} C_{xx}(\alpha)\left[\int_{\alpha}^{2T} dt\right]d\alpha$
$= \dfrac{1}{2T^2}\displaystyle\int_{2T_0}^{2T}\left[\int_{2T_0}^{t} C_{xx}(\alpha)\,d\alpha\right]dt = \dfrac{1}{2T^2}\displaystyle\int_{2T_0}^{2T} I_3\,dt$, $\quad I_3 = \displaystyle\int_{2T_0}^{t} C_{xx}(\alpha)\,d\alpha$

now, given $\dfrac{1}{T}\displaystyle\int_{0}^{T} C_{xx}(\tau)\,d\tau \to 0$ as $T \to \infty$:
$\dfrac{1}{c_1}\displaystyle\int_{0}^{c_1} C_{xx}(\alpha)\,d\alpha \le \dfrac{1}{c_0}\displaystyle\int_{0}^{c_0} C_{xx}(\alpha)\,d\alpha$ for $c_1 > c_0$,
or $\dfrac{1}{c_1}\displaystyle\int_{c_0}^{c_1} C_{xx}(\alpha)\,d\alpha \le \dfrac{1}{c_0}\displaystyle\int_{0}^{c_0} C_{xx}(\alpha)\,d\alpha$


For some given $\epsilon > 0$, we can always find a $c_0$ such that $\dfrac{1}{c_0}\displaystyle\int_{0}^{c_0} C_{xx}(\alpha)\,d\alpha < \epsilon$

then $\dfrac{1}{c_1}\displaystyle\int_{0}^{c_1} C_{xx}(\alpha)\,d\alpha < \epsilon$, or $\dfrac{1}{t}\displaystyle\int_{c}^{t} C_{xx}(\alpha)\,d\alpha < \epsilon$, $\quad c \ge c_0$, $t \ge c$

So, choosing $2T_0 = c_0$: $\quad I_3 = \displaystyle\int_{2T_0}^{t} C_{xx}(\alpha)\,d\alpha < \epsilon t$

then $I_2 < \dfrac{1}{2T^2}\displaystyle\int_{2T_0}^{2T} \epsilon t\,dt = \dfrac{\epsilon}{4T^2}\,(4T^2 - 4T_0^2) = \epsilon\left(1 - \dfrac{T_0^2}{T^2}\right) < \epsilon$

Thus, $\sigma_T^2 = I_1 + I_2 < \epsilon$ as $T \to \infty$.
Since $\epsilon$ is arbitrary, $\sigma_T \to 0$ as $T \to \infty$.


now $\sigma_T^2 = C_{ww}(0) = \displaystyle\int S_{ww}^{c}(\omega)\,\dfrac{d\omega}{2\pi}$

where $S_{ww}^{c}(\omega)$ = covariance spectrum of $w(t)$ $= S_{xx}^{c}(\omega)\left(\dfrac{\sin(T\omega)}{T\omega}\right)^2$

so $\sigma_T^2 = \displaystyle\int S_{xx}^{c}(\omega)\,\dfrac{\sin^2(T\omega)}{T^2\omega^2}\,\dfrac{d\omega}{2\pi}$. As $T$ becomes large, $\dfrac{\sin^2(T\omega)}{T^2\omega^2} \to 0$ for $\omega \ne 0$.

So $\sigma_T^2 \approx \dfrac{1}{2T}\,S_{xx}^{c}(0) \to 0$ as $T \to \infty$ if $S_{xx}^{c}(0)$ is finite, i.e., if $S_{xx}^{c}(\omega)$ does not have an impulse at $\omega = 0$ [$S_{xx}^{c}(\omega)$ is continuous at the origin].

Discrete-time Process:
$\eta_M = \dfrac{1}{2M+1}\displaystyle\sum_{n=-M}^{M} x[n]$, $\quad \sigma_M^2 = \dfrac{1}{2M+1}\displaystyle\sum_{m=-2M}^{2M} C_{xx}[m]\left(1 - \dfrac{|m|}{2M+1}\right)$

$x[n]$ is mean-ergodic iff $\dfrac{1}{M}\displaystyle\sum_{m=0}^{M} C_{xx}[m] \to 0$ as $M \to \infty$
Covariance-Ergodic Process:
Assume $x(t)$ zero-mean; then the time average estimate of $C_{xx}(\lambda)$ is
$C_T(\lambda) = \dfrac{1}{2T}\displaystyle\int_{-T}^{T} z(t)\,dt$, where $z(t) = x(t + \lambda)\,x(t)$

$x(t)$ is covariance-ergodic iff $\dfrac{1}{T}\displaystyle\int_{0}^{T} C_{zz}(\tau)\,d\tau \to 0$ as $T \to \infty$
where $C_{zz}(\tau) = E\{x(t + \lambda + \tau)\,x(t + \tau)\,x(t + \lambda)\,x(t)\} - C_{xx}^2(\lambda)$

If we wish to estimate $C_{xx}(0)$, then
$z(t) = x^2(t)$ and $C_{zz}(\tau) = E\{x^2(t + \tau)\,x^2(t)\} - C_{xx}^2(0)$

If $x(t)$ is a normal process, then $C_{zz}(\tau) = 2C_{xx}^2(\tau)$, so the condition becomes
$\dfrac{1}{T}\displaystyle\int_{0}^{T} C_{xx}^2(\tau)\,d\tau \to 0$ as $T \to \infty$

Distribution-Ergodic Process:
Let $y(t) = U[x - x(t)] = \begin{cases} 1 & x(t) \le x \\ 0 & x(t) > x \end{cases}$
then $E\{y(t)\} = P\{x(t) \le x\} = F_1(x)$

Thus, $F_1(x)$ is estimated by the time average of $y(t)$:
$F_T(x) = \dfrac{1}{2T}\displaystyle\int_{-T}^{T} y(t)\,dt$

and $x(t)$ is distribution-ergodic iff $\dfrac{1}{T}\displaystyle\int_{0}^{T} C_{yy}(\tau)\,d\tau \to 0$ as $T \to \infty$

$C_{yy}(\tau) = E\{y(t+\tau)\,y(t)\} - E^2\{y(t)\} = P\{x(t+\tau) \le x,\, x(t) \le x\} - F_1^2(x) = F_2(x, x; \tau) - F_1^2(x)$
where $F_2(x, x; \tau)$ is the second-order distribution of $x(t)$.
Measurement of Power Spectrum (Spectral Estimation)
In real life, a real process $x(t)$ is available only from $-T$ to $T$:
$x_T(t) = \begin{cases} x(t) & |t| \le T \\ 0 & |t| > T \end{cases}$

$S_{xx}(\omega)$ cannot be estimated directly, since it is not an expectation.

Autocorrelation estimate of power spectrum:
Determine $S_{xx}(\omega)$ from the estimate of the autocorrelation
$R_{xx}(\tau) = E\left\{x\left(t + \dfrac{\tau}{2}\right)x\left(t - \dfrac{\tau}{2}\right)\right\}$

Assume covariance-ergodic:
$R_{xx}(\tau) \approx \dfrac{1}{2T}\displaystyle\int_{-T}^{T} x\left(t + \dfrac{\tau}{2}\right)x\left(t - \dfrac{\tau}{2}\right)dt$

The integrand is available only in the interval $-T + \dfrac{\tau}{2} \le t \le T - \dfrac{\tau}{2}$, $\quad \tau \ge 0$

Option 1: $R_T(\tau) = \dfrac{1}{2T - \tau}\displaystyle\int_{-T+\tau/2}^{T-\tau/2} x\left(t + \dfrac{\tau}{2}\right)x\left(t - \dfrac{\tau}{2}\right)dt$

The estimate is unbiased, but has a large variance.
T  / 2
1    
Option 2: RT ( ) 
2T  x  t   x  t   dt
 T  / 2 
2  2

smaller integration interval at large |τ|


 estimate is worse at large |τ|
 Rxx(τ) for large |τ| scaled down

Estimate is biased, but has smaller variance.

Periodogram estimate:
Take Fourier transform of the available signal,
T

 x(t )e
 jt
XT ( )  dt
T
1
Then power spectrum is ST ( )  | XT ( ) |2
2T
Anamitra Makur School of Electrical & Electronic Engineering 99

In fact, this is the same as option 2 of the autocorrelation estimate:
$R_T(\tau) = \dfrac{1}{2T}\displaystyle\int_{-T+\tau/2}^{T-\tau/2} x\left(t + \dfrac{\tau}{2}\right)x\left(t - \dfrac{\tau}{2}\right)dt = \dfrac{1}{2T}\displaystyle\int_{-\infty}^{\infty} x_T\left(t + \dfrac{\tau}{2}\right)x_T\left(t - \dfrac{\tau}{2}\right)dt$
$= \dfrac{1}{2T}\displaystyle\int_{-\infty}^{\infty} x_T(t)\,x_T(t + \tau)\,dt = \dfrac{1}{2T}\,x_T(\tau) * x_T(-\tau)$

Taking the Fourier transform, $S_T(\omega) = \dfrac{1}{2T}\,X_T(\omega)\,X_T^*(\omega) = \dfrac{1}{2T}\,|X_T(\omega)|^2$


$S_{xx}(\omega)$ is nearly constant in an interval of the order $1/T$ (large $T$)
$\Rightarrow$ asymptotically unbiased, $E\{S_T(\omega)\} \to S_{xx}(\omega)$
$S_{xx}(\omega)$ is not constant in an interval of the order $1/T$ (small $T$)
$\Rightarrow$ biased estimate

To reduce the bias, a data window is used:
$S_c(\omega) = \dfrac{1}{2T}\left|\displaystyle\int_{-T}^{T} c(t)\,x(t)\,e^{-j\omega t}\,dt\right|^2$
$c(t)$ = data window, with Fourier transform $C(\omega)$

$\Rightarrow$ bias $E\{S_c(\omega)\} = \dfrac{1}{4\pi T}\,S_{xx}(\omega) * C^2(\omega)$

Recall that the smaller integration interval at large $|\tau|$
$\Rightarrow$ estimate is worse at large $|\tau|$
$\Rightarrow$ variance of the estimator is large as $\tau \to \infty$
$\Rightarrow$ $S_T(\omega)$ becomes noisy

A data window cannot reduce the estimator variance.
To reduce the variance, a smoothed spectrum is used:
$S_w(\omega) = \displaystyle\int_{-2T}^{2T} w(\tau)\,R_T(\tau)\,e^{-j\omega\tau}\,d\tau$
$w(t)$ = lag window, $\quad W(\omega)$ = spectral window

$\Rightarrow$ bias $E\{S_w(\omega)\} = \dfrac{1}{2\pi}\,S_{xx}(\omega) * W(\omega)$

Variance is reduced when the duration of $w(t)$ is small, but bias is reduced when the duration of $W(\omega)$ is small.
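A discrete-time sketch (an AR(1) signal, so the true psd is known; all lengths are arbitrary): the raw periodogram is asymptotically unbiased but noisy, and convolving it with a spectral window trades a little bias for a much smaller variance.

```python
import numpy as np

rng = np.random.default_rng(6)

# x[n]: white noise through a one-pole filter; true psd 1/|1 - rho e^{-jw}|^2
rho, n = 0.8, 4096
v = rng.standard_normal(n)
x = np.empty(n)
acc = 0.0
for i in range(n):
    acc = rho * acc + v[i]
    x[i] = acc

w = np.linspace(-np.pi, np.pi, n, endpoint=False)
S_true = 1.0 / np.abs(1 - rho * np.exp(-1j * w)) ** 2

# periodogram S_T = |X(w)|^2 / n : asymptotically unbiased, but noisy
S_per = np.abs(np.fft.fftshift(np.fft.fft(x))) ** 2 / n

# smoothed spectrum: periodogram convolved with a rectangular spectral window
W = np.ones(65) / 65
S_sm = np.convolve(S_per, W, mode="same")

err_per = np.mean((S_per - S_true) ** 2)
err_sm = np.mean((S_sm - S_true) ** 2)
print(err_per, err_sm)       # smoothing cuts the mean square error sharply
```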
Markoff Chain and Markoff Process
1) Discrete-time Markoff Chains: (discrete states $a_1, a_2$, etc.)
Markoff process $x_n$ with states $a_i$ (state diagram: states $a_1, a_2, a_3, \ldots$ connected by transition probabilities $\pi_{11}, \pi_{12}, \pi_{13}, \pi_{21}, \pi_{31}, \ldots$)

state probabilities $p_i[n] = P\{x_n = a_i\}$, $\quad i = 1, 2, \ldots$

transition probabilities $\pi_{ij}[n_1, n_2] = P\{x_{n_2} = a_j \mid x_{n_1} = a_i\}$

All outgoing transitions: $\displaystyle\sum_j \pi_{ij}[n_1, n_2] = 1$

All incoming transitions: $\displaystyle\sum_i p_i[k]\,\pi_{ij}[k, n] = p_j[n]$

Chapman-Kolmogoroff equation, for any $n_1 < n_2 < n_3$:
$\pi_{ij}[n_1, n_3] = \displaystyle\sum_r \pi_{ir}[n_1, n_2]\,\pi_{rj}[n_2, n_3]$

If the transition probabilities are invariant to a shift,
$\pi_{ij}[n_1, n_2] = \pi_{ij}[m]$ where $m = n_2 - n_1$,
then the process is called homogeneous,
and the CK equation becomes $\pi_{ij}[n + k] = \displaystyle\sum_r \pi_{ir}[k]\,\pi_{rj}[n]$
(where $k = n_2 - n_1$, $n = n_3 - n_2$)


For a finite-state Markoff chain,
transition matrix $\Pi[n] = \begin{bmatrix} \pi_{11}[n] & \cdots & \pi_{1N}[n] \\ \vdots & \ddots & \vdots \\ \pi_{N1}[n] & \cdots & \pi_{NN}[n] \end{bmatrix}$

from the CK equation, $\Pi[n + k] = \Pi[n]\,\Pi[k] \Rightarrow \Pi[n] = \Pi^n[1]$

State probability vector $P[n] = [\,p_1[n] \;\cdots\; p_N[n]\,] = P[0]\,\Pi^n[1]$

Stationary Markoff chain (invariant distribution), if
$P[2] = P[1] = P \;\Rightarrow\; P[n] = P$ for all $n$ $\;\Leftrightarrow\; P\,\Pi[1] = P$,
or $P$ is an eigenvector of the transition matrix.

Asymptotically stationary Markoff chain, if $\Pi^n[1]$ tends to a limit as $n \to \infty$.

Otherwise, $P[n]$ depends on $n$, and the process is not stationary.


Example 8: Two relationship statuses: in a Relationship (R), Single (S).
Probability that R breaks up the next day is 0.1.
Probability that S finds a boy/girlfriend the next day is 0.5.

Transition probabilities are invariant with time $\Rightarrow$ homogeneous.

From these two probabilities we can find the transition matrix
$\Pi[1] = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}$

Find the probability that R remains R after 2 days:
From the CK equation, the transition matrix for 2 days is
$\Pi[2] = \Pi^2[1] = \begin{bmatrix} 0.86 & 0.14 \\ 0.7 & 0.3 \end{bmatrix}$

Given the state probability vector $P[0] = [\,1 \;\; 0\,]$, the probability vector after 2 days is
$P[2] = P[0]\,\Pi[2] = [\,0.86 \;\; 0.14\,]$
$\Rightarrow$ R remains R after 2 days with probability 0.86.

Find the probability that a student is in a relationship:
The steady-state probability of this stationary Markoff chain satisfies
$P\,\Pi[1] = P$, or $P\,(\Pi[1] - I) = 0$:
$[\,p_1 \;\; p_2\,]\begin{bmatrix} -0.1 & 0.1 \\ 0.5 & -0.5 \end{bmatrix} = [\,0 \;\; 0\,]$
$-0.1\,p_1 + 0.5\,(1 - p_1) = 0 \;\Rightarrow\; p_1 = 0.83,\; p_2 = 0.17$
$\Rightarrow$ A student is in a relationship with probability 0.83.
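The computations in this example can be reproduced directly (a minimal sketch; the steady state is found as the left eigenvector of $\Pi[1]$ for eigenvalue 1):

```python
import numpy as np

Pi = np.array([[0.9, 0.1],      # R -> R, R -> S
               [0.5, 0.5]])     # S -> R, S -> S

# CK equation: transition matrix for 2 days, and the state probabilities
Pi2 = Pi @ Pi
P2 = np.array([1.0, 0.0]) @ Pi2
print(Pi2)       # [[0.86 0.14], [0.70 0.30]]
print(P2)        # [0.86 0.14]

# stationary distribution: left eigenvector of Pi for eigenvalue 1
vals, vecs = np.linalg.eig(Pi.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
p = p / p.sum()
print(p)         # [5/6, 1/6], i.e. about [0.833, 0.167]
```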



2) Continuous-time Markoff Chains: (discrete states $a_1, a_2$, etc.)
$x(t)$ = staircase function with discontinuities at random points (a sample path stays at one state, e.g. $a_i$, and jumps to another, e.g. $a_j$, at random instants)

State probabilities $p_i(t) = P\{x(t) = a_i\}$

Transition probabilities $\pi_{ij}(t_1, t_2) = P\{x(t_2) = a_j \mid x(t_1) = a_i\}$

All outgoing transitions: $\displaystyle\sum_j \pi_{ij}(t_1, t_2) = 1$

All incoming transitions: $\displaystyle\sum_i p_i(t_1)\,\pi_{ij}(t_1, t_2) = p_j(t_2)$

Chapman-Kolmogoroff equation, for any $t_1 < t_2 < t_3$:
$\pi_{ij}(t_1, t_3) = \displaystyle\sum_r \pi_{ir}(t_1, t_2)\,\pi_{rj}(t_2, t_3)$

Homogeneous process: $\pi_{ij}(t_1, t_2) = \pi_{ij}(\tau)$, where $\tau = t_2 - t_1$
The CK equation becomes $\pi_{ij}(\tau + \alpha) = \displaystyle\sum_r \pi_{ir}(\tau)\,\pi_{rj}(\alpha)$ (where $\alpha = t_3 - t_2$)

In matrix form: $\Pi(\tau + \alpha) = \Pi(\tau)\,\Pi(\alpha)$, $\quad \tau, \alpha \ge 0$


transition probability rates $\lambda_{ij} = \pi'_{ij}(0^+)$

Differentiating $\displaystyle\sum_j \pi_{ij}(\tau) = 1$ we obtain $\displaystyle\sum_j \lambda_{ij} = 0$

Also $\pi_{ij}(0) = \delta[i - j] = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$ $\Rightarrow$ $\lambda_{ij} \begin{cases} \le 0 & i = j \\ \ge 0 & i \ne j \end{cases}$

define $\Pi'(0) = \begin{bmatrix} \lambda_{11} & \cdots & \lambda_{1n} \\ \vdots & \ddots & \vdots \\ \lambda_{n1} & \cdots & \lambda_{nn} \end{bmatrix}$

Differentiate the CK equation with respect to $\alpha$: $\Pi'(\tau + \alpha) = \Pi(\tau)\,\Pi'(\alpha)$
Set $\alpha = 0$: $\Pi'(\tau) = \Pi(\tau)\,\Pi'(0)$

With initial condition $\Pi(0) = I$,
the solution to these differential equations is $\Pi(\tau) = e^{\Pi'(0)\tau}$

Similarly, for the state probability vector $P(t) = [\,p_1(t) \;\cdots\; p_N(t)\,]$:
it follows from $\displaystyle\sum_i p_i(t)\,\pi_{ij}(t, t+\tau) = p_j(t+\tau)$ that
$P(t + \tau) = P(t)\,\Pi(\tau)$

Differentiate with respect to $\tau$: $P'(t + \tau) = P(t)\,\Pi'(\tau)$
Set $\tau = 0$: $P'(t) = P(t)\,\Pi'(0)$
Its solution is $P(t) = P(0)\,e^{\Pi'(0)t}$


Example 9: Telegraph signal. Two states $a_1 = A$, $a_2 = -A$.
Given
$\pi_{11}(\Delta t) = P\{x(t + \Delta t) = A \mid x(t) = A\} = 1 - \lambda_1\,\Delta t$
$\pi_{22}(\Delta t) = P\{x(t + \Delta t) = -A \mid x(t) = -A\} = 1 - \lambda_2\,\Delta t$

we can find $\pi_{12}(\Delta t)$, $\pi_{21}(\Delta t)$, and hence $\lambda_{ij} = \pi'_{ij}(0)$:
$\Pi'(0) = \begin{bmatrix} -\lambda_1 & \lambda_1 \\ \lambda_2 & -\lambda_2 \end{bmatrix}$

$P'(t) = P(t)\,\Pi'(0) \;\Rightarrow\; [\,p'_1(t) \;\; p'_2(t)\,] = [\,p_1(t) \;\; p_2(t)\,]\begin{bmatrix} -\lambda_1 & \lambda_1 \\ \lambda_2 & -\lambda_2 \end{bmatrix}$

p1 (t )   1 p1 (t )   2 p2 (t )
 ( 1   2 ) p1 (t )  2 since p2 (t )  1  p1 (t )
2
Solution: p1 (t )  ce ( 1  2 ) t  for some c
( 1  2 )

Initial condition p1 (t ) |t 0  p1 (0)

  2   ( 1 2 ) t 2
 p1 (t )   p1 (0)   e 
 1  2  1   2

This process is asymptotically stationary since


2 1
p1 (t )   p1 , p2 (t )   p2
t     t    
1 2 1 2

Anamitra Makur School of Electrical & Electronic Engineering 114


Transition probabilities: $\Pi'(\tau) = \Pi(\tau)\,\Pi'(0)$
$\begin{bmatrix} \pi'_{11} & \pi'_{12} \\ \pi'_{21} & \pi'_{22} \end{bmatrix} = \begin{bmatrix} \pi_{11} & \pi_{12} \\ \pi_{21} & \pi_{22} \end{bmatrix}\begin{bmatrix} -\lambda_1 & \lambda_1 \\ \lambda_2 & -\lambda_2 \end{bmatrix}$

for example, $\pi'_{11}(\tau) = -\lambda_1\,\pi_{11}(\tau) + \lambda_2\,\pi_{12}(\tau) = -(\lambda_1 + \lambda_2)\,\pi_{11}(\tau) + \lambda_2$ since $\pi_{12}(\tau) = 1 - \pi_{11}(\tau)$

Solution: $\pi_{11}(\tau) = c\,e^{-(\lambda_1 + \lambda_2)\tau} + \dfrac{\lambda_2}{\lambda_1 + \lambda_2}$ for some $c$

Initial condition $\pi_{11}(0) = 1$:
$\pi_{11}(\tau) = p_1 + p_2\,e^{-(\lambda_1 + \lambda_2)\tau}$, $\quad \pi_{22}(\tau) = p_2 + p_1\,e^{-(\lambda_1 + \lambda_2)\tau}$, etc.

(  2  1 ) A
Mean: E{x(t )}  p1  A  p2  (  A) 
1   2
Autocorrelation: P{x(t   )  a j , x(t )  ai }  pi   ij ( )

E{x(t   ) x(t )}
 A2  p1   11 ( )  p2   22 ( )  A2  p1   12 ( )  p2   21 ( )

 A2 ( p1  p2 ) 2  4 p1 p2 e  ( 1  2 ) 

  2  4 A2 p1 p2 e  ( 1  2 )
