
Principles of Communications

Meixia Tao
Dept. of Electronic Engineering, Shanghai Jiao Tong University

Chapter 2: Signal, Random Process, and Spectra

Textbook: Sections 2.1-2.6, 5.1-5.3

Signal and Noise in Communication Systems


In communication systems, the received waveform is usually categorized into a desired part containing the information and an extraneous or undesired part. The desired part is called the signal, and the undesired part is called noise. Noise is one of the most critical and fundamental concepts affecting communication systems. The entire subject of communication systems is, in essence, about methods to overcome the distorting effects of noise. To do so, an understanding of random variables and random processes is essential.

Typical noise source



Topics to be Covered
2.1 Signals
2.2 Review of probability and random variables
2.3 Random processes: basic concepts
2.4 Gaussian and white processes


What is a Signal?
Any physical quantity that varies with time, space, or any other independent variable is called a signal. In communication systems, signals are used to transmit information over a communication channel. Such signals are called information-bearing signals.


Classification of Signals
Signals can be characterized in several ways
Continuous-time signal vs. discrete-time signal
Continuous-valued signal vs. discrete-valued signal
Continuous-time and continuous-valued: analog signal (e.g., speech)
Discrete-time and discrete-valued: digital signal (e.g., CD)
Discrete-time and continuous-valued: sampled signal
Continuous-time and discrete-valued: quantized signal


Deterministic signal vs. random signal


Energy and Power


Energy:

E_x = \int_{-\infty}^{\infty} |x(t)|^2\, dt = \lim_{T \to \infty} \int_{-T/2}^{T/2} |x(t)|^2\, dt

Power:

P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2\, dt

A signal is an energy signal if and only if E_x is finite; a signal is a power signal if and only if P_x is finite and nonzero. Physically realizable waveforms are of the energy type; mathematical models are often of the power type.
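As a quick numerical sketch (my own illustration, not from the slides), the two definitions can be approximated for sampled waveforms; the sampling rate and test signals below are arbitrary choices:

import numpy as np

# Illustrative only: approximate E_x and P_x for sampled waveforms.
fs = 1000.0                            # sampling rate in Hz (assumed)
t = np.arange(-50.0, 50.0, 1.0 / fs)   # observation window [-T/2, T/2]
T = t[-1] - t[0]

pulse = np.exp(-np.abs(t))             # decaying pulse: an energy-type signal
sine = np.cos(2 * np.pi * 5.0 * t)     # unit-amplitude sinusoid: a power-type signal

energy = np.sum(np.abs(pulse) ** 2) / fs     # approximates the integral of |x(t)|^2 dt
power = np.sum(np.abs(sine) ** 2) / fs / T   # approximates (1/T) times that integral

print(energy)   # ~1.0, since the integral of e^{-2|t|} over all t equals 1
print(power)    # ~0.5, the average power of a unit-amplitude cosine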


Probability
Let A be an event in a sample space S. The probability P(A) is a real number that measures the likelihood of the event A.

Axioms of probability:
1) P(A) \ge 0
2) P(S) = 1
3) If A and B are two mutually exclusive events, i.e., A \cap B = \varnothing, then P(A \cup B) = P(A) + P(B)

Elementary Properties of Probability

When A and B are NOT mutually exclusive: P(A \cup B) = P(A) + P(B) - P(A \cap B)

If A \subset B, then P(A) \le P(B)


Conditional Probability
Consider two events A and B in a random experiment. The probability that event A will occur GIVEN that B has occurred, P(A|B), is called the conditional probability. The probability that both A and B occur, P(AB), is called the joint probability. Joint and conditional probabilities are related by
P(AB) = P(B) P(A|B) = P(A) P(B|A)

Alternatively,

P(A|B) = \frac{P(AB)}{P(B)}, \qquad P(B|A) = \frac{P(AB)}{P(A)}

Two events A and B are said to be statistically independent iff

P(AB) = P(A) P(B)

Then P(A|B) = P(A) and P(B|A) = P(B).

Law of Total Probability


Let A_1, A_2, \ldots, A_n be mutually exclusive events with \bigcup_{i=1}^{n} A_i = S.

Then for any event B we have

P(B) = \sum_{i=1}^{n} P(B \mid A_i) P(A_i)


Bayes' Theorem


An extremely useful relationship for conditional probabilities is Bayes' theorem. Let A_1, A_2, \ldots, A_n be mutually exclusive events such that \bigcup_{i=1}^{n} A_i = S, and let B be an arbitrary event with nonzero probability. Then

P(A_i \mid B) = \frac{P(B \mid A_i) P(A_i)}{\sum_{j=1}^{n} P(B \mid A_j) P(A_j)}

This formula will be used to derive the structure of the optimal receiver

Example
Consider a binary communication system
P(0) = 0.3, P(1) = 0.7

P01 = P(receive 1 | sent 0) = 0.01
P00 = P(receive 0 | sent 0) = 1 - P01 = 0.99
P10 = P(receive 0 | sent 1) = 0.1
P11 = P(receive 1 | sent 1) = 1 - P10 = 0.9

What is the probability that the output of this channel is 1? Assuming that we have observed a 1 at the output, what is the probability that the input to the channel was a 1?
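The slides leave the computation to the reader; working it out with the law of total probability and Bayes' theorem:

P(\text{receive } 1) = P(1) P_{11} + P(0) P_{01} = 0.7 \times 0.9 + 0.3 \times 0.01 = 0.633

P(\text{sent } 1 \mid \text{receive } 1) = \frac{P(1) P_{11}}{P(\text{receive } 1)} = \frac{0.63}{0.633} \approx 0.995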

Random Variables (r.v.)


A r.v. is a real-valued function defined on the sample space S, denoted by capital letters X, Y, etc.

A r.v. may be
Discrete-valued: the range is finite (e.g., {0, 1}) or countably infinite (e.g., {1, 2, 3, ...})
Continuous-valued: the range is uncountably infinite (e.g., an interval of the real line)

The Cumulative distribution function (CDF), or simply the probability distribution of a r.v. X, is
F_X(x) = P(X \le x)

Key properties of CDF


1. 0 \le F_X(x) \le 1, with F_X(-\infty) = 0 and F_X(+\infty) = 1
2. F_X(x) is a non-decreasing function of x
3. P(x_1 < X \le x_2) = F_X(x_2) - F_X(x_1)


Probability Density Function (PDF)


The PDF of a r.v. X is defined as

f_X(x) = \frac{d}{dx} F_X(x), \quad \text{or equivalently} \quad F_X(x) = \int_{-\infty}^{x} f_X(y)\, dy

Key properties of the PDF:

1. f_X(x) \ge 0

2. \int_{-\infty}^{\infty} f_X(x)\, dx = 1

3. P(x_1 < X \le x_2) = F_X(x_2) - F_X(x_1) = \int_{x_1}^{x_2} f_X(x)\, dx
Area = P ( x1 < X x2 )
1

FX ( x )
1

f X (x )

P( x1 < X x2 )
Xmin 0 x1 x2 Xmax x Xmin 0

Area = f X ( x )dx

d Xmax x1 x2 x x+dx

x
17
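As an illustrative sketch (my own, not from the slides), these properties are easy to verify numerically; the standard normal distribution from scipy.stats is an arbitrary choice:

import numpy as np
from scipy import stats

X = stats.norm(loc=0.0, scale=1.0)       # a standard Gaussian r.v. as a test case

x1, x2 = -1.0, 2.0
p_cdf = X.cdf(x2) - X.cdf(x1)            # P(x1 < X <= x2) = F_X(x2) - F_X(x1)

xs = np.linspace(x1, x2, 10001)
p_pdf = np.trapz(X.pdf(xs), xs)          # same probability as the area under f_X

print(p_cdf, p_pdf)                      # both ~0.8186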


Joint Distribution
In many situations, one must consider the joint distribution of TWO or more r.v.s. Consider two r.v.s X and Y; their joint distribution function is defined as

F_{XY}(x, y) = P(X \le x,\; Y \le y)

and the joint PDF is

f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x\, \partial y}

Key properties of joint distribution

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y)\, dx\, dy = 1

P(x_1 < X \le x_2,\; y_1 < Y \le y_2) = \int_{y_1}^{y_2} \int_{x_1}^{x_2} f_{XY}(x, y)\, dx\, dy

Marginal distribution
F_X(x) = P(X \le x,\; -\infty < Y < \infty) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{XY}(u, v)\, dv\, du

F_Y(y) = \int_{-\infty}^{y} \int_{-\infty}^{\infty} f_{XY}(u, v)\, du\, dv

Marginal density:

f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dy

X and Y are said to be independent iff


F_{XY}(x, y) = F_X(x) F_Y(y), \quad \text{equivalently} \quad f_{XY}(x, y) = f_X(x) f_Y(y)

Statistical Averages
Let X be a r.v. Then X can be either continuous or discrete. For the moment, consider a discrete r.v. that takes on the possible values x_1, x_2, \ldots, x_M with respective probabilities P_1, P_2, \ldots, P_M. Then the mean or expected value of X is
m_X = E[X] = \sum_{i=1}^{M} x_i P_i

where E[\cdot] denotes the expectation operation (statistical averaging).


If X is continuous, then

m_X = E[X] = \int_{-\infty}^{\infty} x f_X(x)\, dx

This is the first moment of the random variable X. Let g(X) be a function of X; then

E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\, dx

For the special case of g(X) = X^n, we obtain the n-th moment of X, that is

E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\, dx

Letting n = 2, we have the mean-square value of X:

E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\, dx

The n-th central moment is

E[(X - m_X)^n] = \int_{-\infty}^{\infty} (x - m_X)^n f_X(x)\, dx

The second central moment (n = 2) is called the variance:

\sigma_X^2 = E[(X - m_X)^2] = E[X^2 - 2 m_X X + m_X^2] = E[X^2] - m_X^2

\sigma_X, the square root of the variance, is called the standard deviation. It measures the concentration of X around its mean.
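A small sketch (my own illustration) verifying the identity \sigma_X^2 = E[X^2] - m_X^2 on random samples:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=200_000)   # samples of X ~ N(3, 4)

m1 = x.mean()            # estimate of the mean m_X
m2 = (x ** 2).mean()     # estimate of the mean-square value E[X^2]

print(m2 - m1 ** 2)      # ~4.0, i.e., E[X^2] - m_X^2
print(x.var())           # ~4.0, the sample variance, matching the identity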

Correlation
When considering multiple variables, joint moments such as the correlation and covariance between pairs of r.v.s are most useful. The correlation of two r.v.s X and Y is defined as
R_{XY} = E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y f_{XY}(x, y)\, dx\, dy

The correlation of X and Y is the mean of the product of X and Y. The correlation of the two centered r.v.s X - E[X] and Y - E[Y] is called the covariance of X and Y:

\mathrm{Cov}_{XY} = E[(X - E[X])(Y - E[Y])]

The covariance of X and Y normalized w.r.t. \sigma_X \sigma_Y is referred to as the correlation coefficient of X and Y:

\rho_{XY} = \frac{\mathrm{Cov}_{XY}}{\sigma_X \sigma_Y}

X and Y are uncorrelated iff their correlation coefficient is zero: \rho_{XY} = 0.
X and Y are orthogonal iff their correlation is zero:

R_{XY} = E[XY] = 0

If X and Y are independent, then they are uncorrelated. However, the converse is not true in general (the jointly Gaussian case is the notable exception).
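A quick numerical sketch (my own example) of this caveat: a variable and its square are fully dependent yet uncorrelated when the distribution is symmetric about zero:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)       # zero-mean, symmetric
y = x ** 2                         # completely determined by x

cov = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov / (x.std() * y.std())
print(rho)   # ~0: Cov(X, X^2) = E[X^3] = 0 for a zero-mean symmetric X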

Some Useful Probability Distributions


Discrete Distribution
Binary distribution
Binomial distribution

Continuous Distribution
Uniform distribution
Gaussian distribution (the most important one)
Rayleigh distribution (very important in mobile and wireless communications)


Binary Distribution
Let X be a discrete random variable that has two possible values, say X = 1 or X = 0. The distribution of X can be described by its probability mass function (pmf):

P(X = 1) = p, \qquad P(X = 0) = 1 - p

This is frequently used to model binary data.

Mean: E[X] = p
Variance: \sigma_X^2 = p(1 - p)



Binomial Distribution
Let Y = \sum_{i=1}^{n} X_i, where the X_i are independent binary r.v.s with P(X_i = 1) = p.

Then

P(Y = k) = \binom{n}{k} p^k (1 - p)^{n-k}, \quad \text{where} \quad \binom{n}{k} = \frac{n!}{k!\,(n-k)!}

That is, the probability that Y = k is the probability that k of the X_i are equal to 1 and n - k are equal to 0.

Mean: E[Y] = np
Variance: \sigma_Y^2 = np(1 - p)

Example
Suppose that we transmit a 31-bit-long sequence with error-correction capability of up to 3 bit errors. If the probability of a bit error is p = 0.001, what is the probability that this sequence is received in error?

If no error correction is used, the error probability is 1 - (1 - p)^{31} \approx 0.0305.
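A short computation (my own) fills in the numbers for both cases:

from math import comb

n, p, t = 31, 0.001, 3    # block length, bit-error probability, correctable errors

# With correction of up to t errors, the block fails only if more than t bits err
p_coded = 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

# Without error correction, any single bit error corrupts the block
p_uncoded = 1.0 - (1 - p)**n

print(p_coded)     # ~3.1e-8
print(p_uncoded)   # ~0.0305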


Uniform Distribution
The pdf of the uniform distribution is given by

f_X(x) = \begin{cases} \dfrac{1}{b - a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases}

Any example? (A classic one: the random phase of a received carrier, uniformly distributed over [0, 2\pi).)


Gaussian Distribution
The Gaussian distribution, also called the normal distribution, is by far the most important distribution in the statistical analysis of communication systems. The PDF of a Gaussian r.v. is
f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\left( -\frac{(x - m_X)^2}{2\sigma_X^2} \right)

A Gaussian r.v. is completely determined by its mean and variance, and is hence usually denoted as

X \sim N(m_X, \sigma_X^2)

Figure: the bell-shaped PDF p_X(x), centered at m_X.


The Q-Function


The Q-function is a standard form for expressing error probabilities that lack a closed form:
Q(x) = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{u^2}{2} \right) du

The Q-function is the area under the tail of a Gaussian pdf with mean zero and variance one.

Extremely important in error probability analysis!!!
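A minimal sketch (my own, not from the slides): numerically, Q(x) is usually evaluated through the complementary error function, since Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2}):

from math import erfc, sqrt

def Q(x: float) -> float:
    # Tail probability of a zero-mean, unit-variance Gaussian
    return 0.5 * erfc(x / sqrt(2.0))

print(Q(0.0))   # 0.5: half the mass lies above the mean
print(Q(3.0))   # ~1.35e-3: the small tail probabilities typical of error analysis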



More about the Q-Function


The Q-function is monotonically decreasing. Some features:

Q(0) = 1/2, \qquad Q(-x) = 1 - Q(x), \qquad Q(\infty) = 0

Craig's alternative form of the Q-function (IEEE MILCOM '91):

Q(x) = \frac{1}{\pi} \int_{0}^{\pi/2} \exp\left( -\frac{x^2}{2 \sin^2 \theta} \right) d\theta, \quad x \ge 0

Upper bound: Q(x) \le \frac{1}{2} e^{-x^2/2} for x \ge 0.

If we have a Gaussian variable X \sim N(\mu, \sigma^2), then

\Pr(X > x) = Q\!\left( \frac{x - \mu}{\sigma} \right)

Joint Gaussian Random Variables


X_1, X_2, \ldots, X_n are jointly Gaussian iff

f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} \sqrt{\det \mathbf{C}}} \exp\left( -\tfrac{1}{2} (\mathbf{x} - \mathbf{m})^T \mathbf{C}^{-1} (\mathbf{x} - \mathbf{m}) \right)

where \mathbf{x} is a column vector, \mathbf{m} is the vector of means, and \mathbf{C} is the covariance matrix.


Two-Variate Gaussian PDF


Given two r.v.s X_1 and X_2 that are jointly Gaussian, with means m_1, m_2, variances \sigma_1^2, \sigma_2^2, and correlation coefficient \rho, then

f(x_1, x_2) = \frac{1}{2\pi \sigma_1 \sigma_2 \sqrt{1 - \rho^2}} \exp\left( -\frac{1}{2(1 - \rho^2)} \left[ \frac{(x_1 - m_1)^2}{\sigma_1^2} - \frac{2\rho (x_1 - m_1)(x_2 - m_2)}{\sigma_1 \sigma_2} + \frac{(x_2 - m_2)^2}{\sigma_2^2} \right] \right)


For uncorrelated X_1 and X_2, i.e., \rho = 0, the joint PDF factors as f(x_1, x_2) = f(x_1) f(x_2), so X_1 and X_2 are also independent.

If X_1 and X_2 are jointly Gaussian and uncorrelated, then they are independent.



Rayleigh Distribution
Figure: the Rayleigh PDF,

f_R(r) = \frac{r}{\sigma^2} \exp\left( -\frac{r^2}{2\sigma^2} \right), \quad r \ge 0

Rayleigh distributions are frequently used to model fading for non-line-of-sight (NLOS) signal transmission. They are very important for mobile and wireless communications.

Sums of Random Variables


Consider a sequence of r.v.s X_1, X_2, \ldots, X_n.

Weak Law of Large Numbers: let Y = \frac{1}{n} \sum_{i=1}^{n} X_i, and assume that the X_i are uncorrelated with the same mean m_X and variance \sigma_X^2. Then Y \to m_X (in probability) as n \to \infty.

So what? The sample average converges to the expected value.



Central Limit Theorem


Let X_1, X_2, \ldots, X_n be a set of independent random variables with common mean m_X and common variance \sigma_X^2. Next let Y = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \frac{X_i - m_X}{\sigma_X}. Then as n \to \infty, the distribution of Y tends towards a Gaussian distribution. Key conclusion: the sum of many independent random variables is approximately Gaussian. Thermal noise results from the random movement of many electrons; it is therefore well modeled by a Gaussian distribution.
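A small simulation sketch (my own illustration): normalized sums of uniform r.v.s quickly look Gaussian:

import numpy as np

rng = np.random.default_rng(2)
n, trials = 30, 100_000

x = rng.uniform(size=(trials, n))                    # i.i.d. uniforms: mean 1/2, variance 1/12
y = (x.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)    # normalized sum per trial

print(y.mean(), y.std())   # ~0 and ~1; a histogram of y matches the N(0, 1) bell curve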

Random Process
A random process is the natural extension of random variables to the case of signals. It is also referred to as a stochastic process or random signal. Voice signals, TV signals, and the thermal noise generated by a radio receiver are all examples of random signals.


A random process can be described as follows. For each experiment outcome n, there exists a time function x_n(t), called a sample function or realization of the random process. At any time instant t_1, t_2, \ldots, the value of the random process is a random variable X(t_1), X(t_2), \ldots
Figure: sample functions x_1(t), x_2(t), \ldots, x_n(t), one per experiment outcome in the sample space S; sampling all realizations at times t_1 and t_2 yields the random variables X(t_1) and X(t_2).

Statistics of Random Processes


By sampling the random process at any time, we get a random variable. From this viewpoint, we can think of a random process as an infinite collection of random variables specified at times t: {X(t_1), X(t_2), \ldots, X(t_n)}. Thus, a random process can be completely defined statistically as a collection of random variables indexed by time, with properties defined by a joint PDF.

A random process X(t) is described by its M-th order statistics if for all n \le M and all (t_1, t_2, \ldots, t_n), the joint pdf of {X(t_1), X(t_2), \ldots, X(t_n)} is given. This joint pdf is written as

f(x_1, x_2, \ldots, x_n;\; t_1, t_2, \ldots, t_n)

In order to completely specify a random process, one must give f(x_1, x_2, \ldots, x_n;\; t_1, t_2, \ldots, t_n) for all possible values of \{x_i\}, \{t_i\}, and for all n. This is obviously quite difficult in general.

First-Order Statistics of Random Processes


The first-order statistics is simply the PDF of the random variable at one particular time:

f(x; t) = first-order density of X(t)
F(x; t) = P(X(t) \le x), first-order distribution of X(t)

Mean:

E[X(t_0)] = \int_{-\infty}^{\infty} x f_X(x; t_0)\, dx = m_X(t_0)

Variance:

E\left[ \big( X(t_0) - m_X(t_0) \big)^2 \right] = \sigma_X^2(t_0)


Second-Order Statistics of Random Processes


Second-order statistics means the joint PDF of X(t_1) and X(t_2) for all choices of t_1 and t_2.

Autocorrelation function: let t_1 = t and t_2 = t + \tau. Then

R_X(t; \tau) = E[X(t) X(t + \tau)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 f(x_1, x_2;\; t, t + \tau)\, dx_1 dx_2

The physical meaning of R_X(t; \tau) is a measure of the relationship between X(t) and X(t + \tau) (correlation within a process). In general, the autocorrelation function is a function of both t and \tau.


Example
Given the stochastic process X(t) = A \cos(2\pi f_0 t + \Theta), where \Theta is a random variable uniformly distributed over [0, 2\pi]. At each t, X(t) can be viewed as a function of \Theta.

The mean is E[X(t)] = 0.

The autocorrelation is R_X(t; \tau) = \frac{A^2}{2} \cos(2\pi f_0 \tau), which depends only on \tau.
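A quick simulation sketch (my own, assuming the random-phase sinusoid above) confirming the mean and the autocorrelation:

import numpy as np

rng = np.random.default_rng(3)
A, f0 = 2.0, 5.0
theta = rng.uniform(0.0, 2 * np.pi, size=100_000)   # one random phase per realization

t, tau = 0.13, 0.04                                  # arbitrary time and lag
x1 = A * np.cos(2 * np.pi * f0 * t + theta)
x2 = A * np.cos(2 * np.pi * f0 * (t + tau) + theta)

print(x1.mean())                                     # ~0, the ensemble mean
print((x1 * x2).mean())                              # ensemble autocorrelation estimate
print(A**2 / 2 * np.cos(2 * np.pi * f0 * tau))       # theory: (A^2/2) cos(2*pi*f0*tau)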


Stationary Processes
A stochastic process is said to be stationary if for any n and any time shift \Delta the following holds:

f_X(x_1, x_2, \ldots, x_n;\; t_1, t_2, \ldots, t_n) = f_X(x_1, x_2, \ldots, x_n;\; t_1 + \Delta, t_2 + \Delta, \ldots, t_n + \Delta) \qquad (1)

Therefore:

The first-order statistics is independent of t:

\text{mean: } E\{X(t)\} = \int_{-\infty}^{\infty} x f_X(x)\, dx = m_X \qquad (2)

The second-order statistics depends only on the gap between t_1 and t_2:

\text{autocorrelation: } R_X(t_1, t_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 f_X(x_1, x_2;\; t_2 - t_1)\, dx_1 dx_2 = R_X(t_2 - t_1) = R_X(\tau), \quad \text{where } \tau = t_2 - t_1 \qquad (3)


Wide-Sense Stationary


Engineers often care about the first- and second-order statistics only. A random process is said to be WSS (wide-sense stationary) when conditions (2) and (3) hold. A random process is said to be strictly stationary when condition (1) holds.

Example: X(t) = A \cos(2\pi f_0 t + \Theta), where \Theta is uniform over [0, 2\pi]. Its mean is constant, and its autocorrelation R_X(\tau) = \frac{A^2}{2} \cos(2\pi f_0 \tau) depends only on the time difference \tau. Thus, X(t) is WSS.



Averages and Ergodicity


Ensemble averaging:

m_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x f(x; t)\, dx

R_X(t_1, t_2) = E[X(t_1) X(t_2)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 f(x_1, x_2;\; t_1, t_2)\, dx_1 dx_2

Time averaging:

\langle X(t) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, dt

\langle X(t) X(t + \tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t + \tau)\, dt

In general, ensemble averages and time averages are not equal. A random process X(t) is said to be ergodic if all time averages and ensemble averages are equal.

Random Processes in the Frequency Domain: Power Spectral Density


Let X(t) denote a random process and let x(t, n) denote a sample function of this process. Truncate the signal by defining

x_T(t, n) = \begin{cases} x(t, n), & |t| \le T/2 \\ 0, & \text{otherwise} \end{cases}

in order to get an energy signal. Performing a Fourier transform on x_T(t, n), we get X_T(f, n). According to Parseval's theorem,

\int_{-T/2}^{T/2} |x(t, n)|^2\, dt = \int_{-\infty}^{\infty} |X_T(f, n)|^2\, df

where |X_T(f, n)|^2 is the energy spectral density.



Then the power spectral density is the average energy spectral density per unit time, i.e., |X_T(f, n)|^2 / T.

Letting T \to \infty, we define the power spectral density of the sample function:

S_X(f, n) = \lim_{T \to \infty} \frac{|X_T(f, n)|^2}{T}

If we take the ensemble average, the power spectral density (PSD) of the random process is

S_X(f) = \lim_{T \to \infty} \frac{E\left[ |X_T(f)|^2 \right]}{T} \quad \text{Watts/Hz} \qquad (4)

This is the general definition of the power spectral density.



PSD of Stationary Process


Wiener-Khinchin theorem: for a stationary random process X(t), the PSD is equal to the Fourier transform of the autocorrelation function, i.e., S_X(f) \leftrightarrow R_X(\tau):

S_X(f) = \int_{-\infty}^{\infty} R_X(\tau) \exp(-j 2\pi f \tau)\, d\tau

R_X(\tau) = \int_{-\infty}^{\infty} S_X(f) \exp(j 2\pi f \tau)\, df

In general, S_X(f) is a measure of the relative power in the random signal at each frequency component:

R_X(0) = \int_{-\infty}^{\infty} S_X(f)\, df = \text{total power}
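As an illustrative sketch (my own, not from the slides): for white noise samples, a PSD estimated by Welch averaging of |X_T(f)|^2 / T over segments is flat, and its integral recovers R_X(0), the total power:

import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs = 1000.0
x = rng.normal(size=200_000)              # unit-variance, approximately white samples

f, Sx = signal.welch(x, fs=fs, nperseg=1024)   # one-sided PSD estimate

print(Sx.mean())        # ~2/fs = 0.002: flat across frequency, as expected
print(np.trapz(Sx, f))  # ~1.0 = var(x): total power equals R_X(0)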


Gaussian Process
The importance of Gaussian processes in communication systems is due to the fact that thermal noise in electronic devices can be closely modeled by a Gaussian process.

Definition:
A random process X(t) is a Gaussian process if for all n and all (t_1, t_2, \ldots, t_n), the random variables {X(t_1), X(t_2), \ldots, X(t_n)} have a joint Gaussian density function.


Properties of Gaussian Processes


If a Gaussian random process is wide-sense stationary, then it is also strictly stationary.
Any sample of a Gaussian random process is a Gaussian random variable.
If the input to a linear system is a Gaussian random process, then the output is also a Gaussian random process.


Random Process Transmission Through Linear Systems


Consider a linear system
X(t) \to [\text{impulse response } h(t)] \to Y(t)

Y(t) = X(t) * h(t) = \int_{-\infty}^{\infty} h(\tau) X(t - \tau)\, d\tau

The mean of the output random process Y(t)


m_Y(t) = E[Y(t)] = \int_{-\infty}^{\infty} h(\tau)\, E[X(t - \tau)]\, d\tau

If X(t) is WSS:

m_Y = m_X \int_{-\infty}^{\infty} h(\tau)\, d\tau = m_X H(0)

where H(0) is the zero-frequency (DC) response of the system.

The autocorrelation of Y(t)


R_Y(t, u) = E[Y(t) Y(u)]
= E\left[ \int h(\tau_1) X(t - \tau_1)\, d\tau_1 \int h(\tau_2) X(u - \tau_2)\, d\tau_2 \right]
= \int h(\tau_1)\, d\tau_1 \int h(\tau_2)\, E[X(t - \tau_1) X(u - \tau_2)]\, d\tau_2

If X(t) is WSS:

R_Y(\tau) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\tau_1) h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\, d\tau_1 d\tau_2

If the input is a WSS random process, the output is also a WSS random process.

Relation Between the Input and Output PSDs


Autocorrelation of Y(t)
R_Y(\tau) = \int \int h(\tau_1) h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\, d\tau_1 d\tau_2
= \int h(\tau_2)\, d\tau_2 \int h(\tau_1)\, R_X(\tau + \tau_2 - \tau_1)\, d\tau_1
= \int h(\tau_2) \left[ h(\tau + \tau_2) * R_X(\tau + \tau_2) \right] d\tau_2
= h(\tau) * h(-\tau) * R_X(\tau)

PSD of Y(t): S_Y(f) = |H(f)|^2 S_X(f)
X(t), S_X(f) \to [h(t)] \to Y(t), S_Y(f)

Key result: S_Y(f) = |H(f)|^2 S_X(f)
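A short simulation sketch (my own illustration) of this key result: low-pass filtering white noise and comparing the measured output PSD against |H(f)|^2 S_X(f). The filter choice is arbitrary:

import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
fs = 1000.0
x = rng.normal(size=500_000)                  # white input, one-sided S_X(f) ~ 2/fs

b, a = signal.butter(4, 100.0, fs=fs)         # 4th-order low-pass, 100 Hz cutoff
y = signal.lfilter(b, a, x)                   # Y(t) = h(t) * X(t)

f, Sy = signal.welch(y, fs=fs, nperseg=2048)  # measured output PSD
_, H = signal.freqz(b, a, worN=f, fs=fs)      # filter frequency response H(f)

print(np.max(np.abs(Sy - np.abs(H)**2 * 2.0 / fs)))   # small: S_Y = |H|^2 S_X holds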

Noise
Noise is a critical component in the analysis of the performance of communication receivers. It is often assumed to be Gaussian and stationary. The mean is taken to be zero, while the autocorrelation is usually specified by the power spectral density. The noise is white noise when all frequency components appear with equal power ("white" is used as in white light, for a similar reason):

S_n(f) = \frac{N_0}{2}, \qquad R_n(\tau) = \frac{N_0}{2}\, \delta(\tau)

White noise is completely uncorrelated: samples taken at any two distinct times are uncorrelated.

Figure: the flat PSD S_n(f) at level N_0/2, and the impulse autocorrelation R_n(\tau).


Bandlimited Noise
White noise \to [filter, bandwidth B Hz] \to bandlimited white noise n(t)

In most applications, N_0 = kT = 4.14 \times 10^{-21} W/Hz = -174 dBm/Hz (k is Boltzmann's constant, T the temperature, at room temperature)

At what sampling rate can we sample the noise to get uncorrelated realizations?
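The slides pose this as a question; a standard worked answer (assuming an ideal low-pass filter of bandwidth B) follows from the autocorrelation of the filtered noise:

R_n(\tau) = N_0 B\, \mathrm{sinc}(2B\tau), \qquad \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}

Since R_n(k / 2B) = 0 for every nonzero integer k, samples taken at the Nyquist rate 2B are uncorrelated (and, for Gaussian noise, independent).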

Narrow-Band Random Process


The bandwidth of the signal is limited to a narrow band around a central frequency f_c >> 0.

Figure: a PSD S(f) concentrated in a narrow band around f_c.

Canonical form of a narrow-band process:

X(t) = X_I(t) \cos(2\pi f_0 t) - X_Q(t) \sin(2\pi f_0 t)

where X_I(t) is the in-phase component and X_Q(t) is the quadrature component.

Narrow-Band Noise


Let n(t) be a zero-mean, stationary noise:

n(t) = n_c(t) \cos \omega_0 t - n_s(t) \sin \omega_0 t

Find the statistics of n_c(t) and n_s(t).

Result 1:

E\{n(t)\} = E\{n_c(t)\} = E\{n_s(t)\} = 0

Proof: E[n(t)] = E[n_c(t)] \cos \omega_0 t - E[n_s(t)] \sin \omega_0 t. Since n(t) is stationary and zero-mean, E[n(t)] = 0 for any t; as \cos \omega_0 t and \sin \omega_0 t vary with t, this forces

E\{n_c(t)\} = E\{n_s(t)\} = 0

Result 2:

S_{n_c}(f) = S_{n_s}(f) = \begin{cases} S_n(f - f_0) + S_n(f + f_0), & |f| \le B/2 \\ 0, & \text{otherwise} \end{cases}

Proof sketch (figure): n(t) is multiplied by 2\cos \omega_0 t and by (-2\sin \omega_0 t) to form Z_1(t) and Z_2(t), each of which is passed through an ideal low-pass filter H_L(f) of bandwidth B/2 to yield n_c(t) and n_s(t), respectively.

Result 3: for the same t, n_c(t) and n_s(t) are uncorrelated (or independent, in the Gaussian case):

R_{n_c n_s}(0) = 0

Result 4:

E\{n^2(t)\} = E\{n_c^2(t)\} = E\{n_s^2(t)\} = \sigma^2

Result 5: if n(t) is a Gaussian process, so are n_c(t) and n_s(t).


Envelope and Phase


Angular representation of n(t)
n(t) = R(t) \cos[\omega_0 t + \phi(t)]

where

R(t) = \sqrt{n_c^2(t) + n_s^2(t)} \quad \text{(envelope)}, \qquad \phi(t) = \tan^{-1} \frac{n_s(t)}{n_c(t)}, \quad \phi(t) \in [0, 2\pi) \quad \text{(phase)}

Figure: a sample of narrow-band noise n(t); the envelope R(t) varies slowly, on a time scale of order 1/B, while the underlying oscillation has period of order 1/f_0.


Let n(t) be a zero-mean, stationary Gaussian process; find the statistics of the envelope and phase.

Result: the envelope follows a Rayleigh distribution, while the phase follows a uniform distribution.

Proof (marginalizing the joint density f(R, \phi)):

f(R) = \int_0^{2\pi} f(R, \phi)\, d\phi = \frac{R}{\sigma^2} \exp\left( -\frac{R^2}{2\sigma^2} \right), \quad R \ge 0

f(\phi) = \int_0^{\infty} f(R, \phi)\, dR = \frac{1}{2\pi}, \quad \phi \in [0, 2\pi)

For the same t, the envelope variable R and the phase variable \phi are independent (but the two processes are not independent).
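A closing simulation sketch (my own illustration), generating the two Gaussian quadrature components directly and checking the envelope and phase statistics:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sigma = 1.5
nc = rng.normal(0.0, sigma, size=200_000)   # in-phase component, N(0, sigma^2)
ns = rng.normal(0.0, sigma, size=200_000)   # quadrature component, N(0, sigma^2)

R = np.hypot(nc, ns)                            # envelope sqrt(nc^2 + ns^2)
phi = np.mod(np.arctan2(ns, nc), 2 * np.pi)     # phase folded into [0, 2*pi)

# Kolmogorov-Smirnov tests against the claimed distributions
print(stats.kstest(R, stats.rayleigh(scale=sigma).cdf).pvalue)     # large: Rayleigh fits
print(stats.kstest(phi, stats.uniform(0, 2 * np.pi).cdf).pvalue)   # large: uniform fits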
