Part 1 Random Processes for Communications
System Models
o A good mathematical model for a system is the
  basis of its analysis.
o Two models are often considered:
  n Deterministic model
    o No uncertainty about its time-dependent behavior
      at any instant of time
  n Random or stochastic model
    o Uncertain about its time-dependent behavior at
      any instant of time,
    o but certain about its statistical behavior at any
      instant of time
© Po-Ning [email protected] 1-2
Examples of Stochastic Models
o Channel noise and interference
o Source of information, such as voice
o Independence of events: P(B | A) = P(B)
  n Knowledge of the occurrence of event A tells us no
    more about the probability of occurrence of event B
    than we knew without this knowledge.
  n Hence, the events are statistically independent.
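The independence relation P(B | A) = P(B) can be checked numerically. A minimal sketch, where the two events (parity of one die, a six on another) and the sample size are arbitrary illustrative choices:

```python
import numpy as np

# Estimate P(B) and P(B | A) for two events on independent dice:
# A = "first die shows an even number", B = "second die shows a 6".
rng = np.random.default_rng(0)
n = 200_000
die1 = rng.integers(1, 7, size=n)
die2 = rng.integers(1, 7, size=n)

A = (die1 % 2 == 0)
B = (die2 == 6)

p_B = B.mean()             # estimate of P(B), about 1/6
p_B_given_A = B[A].mean()  # estimate of P(B | A), also about 1/6
print(p_B, p_B_given_A)
```

Because the dice are independent, conditioning on A leaves the estimate of P(B) essentially unchanged.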
R_X(τ) is a short-hand for the autocorrelation function of a stationary process.
© Po-Ning [email protected] 1-17
Autocorrelation
o Conceptually,
  n Autocorrelation function = “power correlation”
    between two time instants t1 and t2.
  n “Variance” is the degree of variation around a
    standard value (i.e., the mean).
Hence, by the Cauchy–Schwarz inequality,

$$|R_X(\tau)| \le R_X(0),$$

with equality holding when $\tau = 0$.
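A quick numerical illustration of the standard bound $|R_X(\tau)| \le R_X(0)$ on a stationary autocorrelation. The AR(1) model below is an arbitrary stand-in stationary process, not one from the slides:

```python
import numpy as np

# Estimate the autocorrelation of a stationary process by time averaging
# and check that no lag exceeds the lag-0 value.
rng = np.random.default_rng(1)
n = 100_000
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for k in range(1, n):
    x[k] = 0.9 * x[k - 1] + w[k]  # stationary AR(1): X[k] = 0.9 X[k-1] + W[k]

def acorr(x, lag):
    """Time-average estimate of R_X(lag)."""
    return np.mean(x[: n - lag] * x[lag:]) if lag else np.mean(x * x)

R0 = acorr(x, 0)
peaks = [abs(acorr(x, lag)) for lag in range(1, 50)]
print(R0, max(peaks))  # every |R_X(lag)| stays below R_X(0)
```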
[Figure: correlator receiver — binary data 0110… modulates a carrier wave A cos(2πf_c t); noise w(t) = 0; the receiver multiplies the received s(t) by a local carrier cos(2πf_c t + Θ), integrates over [0, T] (the correlator y_T = ∫₀ᵀ (·) dt), and compares y_T against the threshold 0]
For $X(t) = A\cos(2\pi f_c t + \Theta)$ with $\Theta$ uniformly distributed over $[-\pi, \pi]$:

$$\begin{aligned}
\mu_X(t) &= E[A\cos(2\pi f_c t + \Theta)]
= A\int_{-\pi}^{\pi} \cos(\theta + 2\pi f_c t)\,\frac{1}{2\pi}\,d\theta \\
&= \frac{A}{2\pi}\big(\sin(\theta + 2\pi f_c t)\big)\Big|_{\theta=-\pi}^{\pi} \\
&= \frac{A}{2\pi}\big(\sin(\pi + 2\pi f_c t) - \sin(-\pi + 2\pi f_c t)\big) \\
&= 0.
\end{aligned}$$
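The zero-mean result above is easy to confirm by Monte Carlo. A sketch, where A, f_c, and the test times are arbitrary illustrative values:

```python
import numpy as np

# Monte Carlo check: with Theta uniform on [-pi, pi),
# E[A cos(2*pi*fc*t + Theta)] = 0 at every time t.
rng = np.random.default_rng(2)
A, fc = 2.0, 5.0
theta = rng.uniform(-np.pi, np.pi, size=1_000_000)
means = [np.mean(A * np.cos(2 * np.pi * fc * t + theta))
         for t in (0.0, 0.13, 0.7)]
print(means)  # each entry close to 0
```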
Example: Signal with Random Phase
$$\begin{aligned}
R_X(t_1, t_2) &= E[A\cos(2\pi f_c t_1 + \Theta)\cdot A\cos(2\pi f_c t_2 + \Theta)] \\
&= A^2 \int_{-\pi}^{\pi} \cos(\theta + 2\pi f_c t_1)\cos(\theta + 2\pi f_c t_2)\,\frac{1}{2\pi}\,d\theta \\
&= \frac{A^2}{2\pi}\int_{-\pi}^{\pi} \frac{1}{2}\big(\cos[(\theta + 2\pi f_c t_1) + (\theta + 2\pi f_c t_2)] + \cos[(\theta + 2\pi f_c t_1) - (\theta + 2\pi f_c t_2)]\big)\,d\theta \\
&= \frac{A^2}{2\pi}\int_{-\pi}^{\pi} \frac{1}{2}\big(\cos(2\theta + 2\pi f_c (t_1 + t_2)) + \cos(2\pi f_c (t_1 - t_2))\big)\,d\theta \quad \text{(first term integrates to 0)} \\
&= \frac{A^2}{2}\cos(2\pi f_c (t_1 - t_2)).
\end{aligned}$$

Hence, X(t) is WSS.
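The closed form $R_X(t_1,t_2) = (A^2/2)\cos(2\pi f_c (t_1 - t_2))$ can also be verified by Monte Carlo. A sketch with arbitrary A, f_c, and time pairs:

```python
import numpy as np

# Monte Carlo check that the autocorrelation of the random-phase sinusoid
# depends only on t1 - t2 and matches (A**2/2) * cos(2*pi*fc*(t1 - t2)).
rng = np.random.default_rng(3)
A, fc = 2.0, 5.0
theta = rng.uniform(-np.pi, np.pi, size=1_000_000)

def R_est(t1, t2):
    x1 = A * np.cos(2 * np.pi * fc * t1 + theta)
    x2 = A * np.cos(2 * np.pi * fc * t2 + theta)
    return np.mean(x1 * x2)

def R_theory(t1, t2):
    return (A ** 2 / 2) * np.cos(2 * np.pi * fc * (t1 - t2))

pairs = [(0.0, 0.0), (0.3, 0.1), (1.0, 0.25)]
errs = [abs(R_est(t1, t2) - R_theory(t1, t2)) for t1, t2 in pairs]
print(errs)  # all small
```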
Example: Signal with Random Phase
[Figure: plot of R_X(τ) = (A²/2) cos(2πf_c τ) for the random-phase sinusoid]

[Figure: binary data 0110… mapped to symbols …, I_{−3}, …, I_0, …; baseband pulse m(t) = p(t), transmitted signal x(t) = A p(t); no (or ignored) carrier wave, noise w(t) = 0; the receiver correlates with ∫₀ᵀ (·) dt. An equivalent view: the received signal is X(t) = A p(t − t_d) with random delay t_d]
Example: Signal with Random Delay
n By assuming that td is uniformly distributed over [0,
T), we obtain:
$$\begin{aligned}
\mu_X(t) &= E\left[\sum_{n=-\infty}^{\infty} A\, I_n\, p(t - nT - t_d)\right] \\
&= \sum_{n=-\infty}^{\infty} A\, E[I_n]\, E[p(t - nT - t_d)] \\
&= \sum_{n=-\infty}^{\infty} A \cdot 0 \cdot E[p(t - nT - t_d)] \\
&= 0
\end{aligned}$$
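A Monte Carlo sketch of this zero-mean result, assuming I_n = ±1 equally likely (so E[I_n] = 0) and p a unit rectangular pulse of width T, so that exactly one n contributes at any fixed t; A, T, and the observation time are arbitrary:

```python
import numpy as np

# Monte Carlo check of mu_X(t) = 0 for the random-delay pulse train
# X(t) = A * sum_n I_n p(t - n*T - t_d), t_d uniform on [0, T).
rng = np.random.default_rng(4)
A, T, t = 1.0, 1.0, 0.37        # arbitrary amplitude, symbol period, time
trials = 400_000
td = rng.uniform(0.0, T, size=trials)
n_active = np.floor((t - td) / T)   # index of the single active pulse at time t
I_active = rng.choice([-1.0, 1.0], size=trials)  # its symbol, +/-1 equally likely
X_t = A * I_active                  # p(...) = 1 for that n, 0 for all others
print(X_t.mean())  # close to 0
```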
Example: Signal with Random Delay
n A useful probabilistic rule: E[X] = E[E[X|Y]]
So, we have

$$R_X(t_1, t_2) = A^2 \sum_{n=-\infty}^{\infty} E[I_n^2]\, E[p(t_1 - nT - t_d)\, p(t_2 - nT - t_d)],$$

since $E[I_n I_m] = E[I_n]E[I_m] = 0$ for $n \ne m$.
n Among $-\infty < n < \infty$, there is at most one $n$ that can make
  $p(t_1 - nT - t_d)\, p(t_2 - nT - t_d) = 1$.
For jointly WSS processes X(t) and Y(t), $R_{X,Y}(t, u) = R_{X,Y}(t - u)$, and the correlation matrix depends only on $t - u$:

$$\mathbf{R}(t, u) = \begin{bmatrix} R_X(t-u) & R_{X,Y}(t-u) \\ R_{Y,X}(t-u) & R_Y(t-u) \end{bmatrix}$$
$$\begin{aligned}
R_{X_I, X_Q}(t, u) &= E[X_I(t)\, X_Q(u)] \\
&= E[X(t)\cos(2\pi f_c t + \Theta)\cdot X(u)\sin(2\pi f_c u + \Theta)] \\
&= E[X(t) X(u)]\, E[\sin(2\pi f_c u + \Theta)\cos(2\pi f_c t + \Theta)] \\
&= R_X(t, u)\, E\left[\frac{\sin(2\pi f_c (t+u) + 2\Theta) + \sin(2\pi f_c (u - t))}{2}\right] \\
&= -\frac{1}{2}\sin(2\pi f_c (t - u))\, R_X(t, u),
\end{aligned}$$

since $E[\sin(2\pi f_c (t+u) + 2\Theta)] = 0$. In particular,

$$R_{X_I, X_Q}(t, t) = 0$$
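A Monte Carlo sketch of $R_{X_I, X_Q}(t, t) = 0$. Any X(t) independent of Θ works; here a standard normal sample stands in for X at one fixed time, and f_c and t are arbitrary:

```python
import numpy as np

# Check that the in-phase and quadrature components are uncorrelated at
# equal times when Theta is uniform on [-pi, pi) and independent of X(t).
rng = np.random.default_rng(5)
fc, t = 5.0, 0.42                   # arbitrary carrier frequency and time
n = 1_000_000
theta = rng.uniform(-np.pi, np.pi, size=n)
x_t = rng.standard_normal(n)        # stand-in for X(t), independent of Theta
x_i = x_t * np.cos(2 * np.pi * fc * t + theta)   # in-phase component
x_q = x_t * np.sin(2 * np.pi * fc * t + theta)   # quadrature component
cross = np.mean(x_i * x_q)
print(cross)  # close to 0
```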
A process X(t) is ergodic in the mean if its time average converges to the ensemble average:

1. $\Pr\big[\lim_{T\to\infty} \mu_X(T) = \mu_X\big] = 1$, and
2. $\lim_{T\to\infty} \mathrm{Var}[\mu_X(T)] = 0$,

where

$$\mu_X(T) = \frac{1}{2T}\int_{-T}^{T} X(t)\,dt$$
Similarly, X(t) is ergodic in the autocorrelation function if

1. $\Pr\big[\lim_{T\to\infty} R_X(\tau; T) = R_X(\tau)\big] = 1$, and
2. $\lim_{T\to\infty} \mathrm{Var}[R_X(\tau; T)] = 0$,

where

$$R_X(\tau; T) = \frac{1}{2T}\int_{-T}^{T} X(t+\tau)\, X^*(t)\,dt$$
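The time-average estimator μ_X(T) can be illustrated for the random-phase sinusoid, which is ergodic in the mean. A sketch that discretizes the time integral as a Riemann sum, with arbitrary A and f_c:

```python
import numpy as np

# For one realization (one fixed Theta) of X(t) = A cos(2*pi*fc*t + Theta),
# the time average over [-T, T] approaches the ensemble mean mu_X = 0.
rng = np.random.default_rng(6)
A, fc = 2.0, 5.0
theta = rng.uniform(-np.pi, np.pi)   # one realization = one fixed phase

def time_avg(T, num=200_001):
    t = np.linspace(-T, T, num)      # uniform grid approximating the integral
    return np.mean(A * np.cos(2 * np.pi * fc * t + theta))

for T in (1.0, 10.0, 100.0):
    print(T, time_avg(T))  # magnitude shrinks roughly like 1/T
```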
Ergodicity
o Experiments (or observations) on the same
  process can only be performed at different times.
o “Stationarity” only guarantees that observations
  made at different times come from the same
  distribution.
  n Example. Rolling two different fair dice gives two
    results, but the two results have the same
    distribution.
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx$$
[Figure: transmitter-to-receiver link X(t) → Y(t) modeled as an LTI channel; multipath components arrive with gain–delay pairs (a_2, τ_2), (a_3, τ_3)]
$$Y(t) = \int_{-\infty}^{\infty} h(\tau)\, X(t - \tau)\,d\tau$$

Hence, when X(t) has constant mean $\mu_X$,

$$\mu_Y(t) = E\left[\int_{-\infty}^{\infty} h(\tau)\, X(t-\tau)\,d\tau\right] = \int_{-\infty}^{\infty} h(\tau)\, E[X(t - \tau)]\,d\tau = \mu_X \int_{-\infty}^{\infty} h(\tau)\,d\tau$$
[Figure: impulse response h(t)]
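A discrete-time sketch of the mean relation $\mu_Y = \mu_X \int h(\tau)\,d\tau$, with an arbitrary input mean and impulse response; the sum of the filter taps plays the role of the integral:

```python
import numpy as np

# Pass a WSS input with mean mu_X through an LTI filter h and compare the
# output mean with mu_X * sum(h), the discrete analogue of mu_X * int h.
rng = np.random.default_rng(7)
mu_X = 3.0
x = mu_X + rng.standard_normal(500_000)   # WSS input: mean mu_X, unit variance
h = np.array([0.5, 0.3, 0.2, 0.1])        # arbitrary impulse response
y = np.convolve(x, h, mode="valid")       # Y = h * X, edge effects trimmed
print(y.mean(), mu_X * h.sum())  # approximately equal
```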