Field Guide to Probability, Random Processes, and Random Data Analysis

This excerpt discusses random variables and their probability distributions. It defines key concepts such as discrete and continuous random variables, introduces the cumulative distribution function, and shows how functions of random variables transform their distributions. A worked example shows how squaring a random variable changes its probability density function.


Probability: One Random Variable

Random Variables and Cumulative Distribution

A probability distribution shows the probabilities observed in an experiment. The quantity observed in a given trial of an experiment is a number called a random variable (RV). In the following, RVs are designated by boldface letters such as x and y.

• Discrete RV: a variable that can take on only certain discrete values.
• Continuous RV: a variable that can assume any value within a specified range (possibly infinite).

For a given RV x, there are three primary events to consider involving probabilities:

{x ≤ a}, {a < x ≤ b}, {x > b}

For the general event {x ≤ x}, where x is any real number, we define the cumulative distribution function (CDF) as

F_x(x) = Pr(x ≤ x), −∞ < x < ∞

The CDF is a probability and thus satisfies the following properties:

1. 0 ≤ F_x(x) ≤ 1, −∞ < x < ∞
2. F_x(a) ≤ F_x(b), for a < b
3. F_x(−∞) = 0, F_x(∞) = 1

We also note that

Pr(a < x ≤ b) = F_x(b) − F_x(a)
Pr(x > x) = 1 − F_x(x)
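These relations are easy to check numerically. The following is a minimal sketch in Python (assuming NumPy and SciPy are available, and taking x to be a standard normal RV purely for illustration) that compares an empirical estimate of Pr(a < x ≤ b) against the CDF difference F_x(b) − F_x(a):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)  # draws of the RV x (illustrative choice)

a, b = -0.5, 1.25
# Empirical estimate of Pr(a < x <= b) from the samples
empirical = np.mean((samples > a) & (samples <= b))
# CDF difference F_x(b) - F_x(a) for the standard normal
analytic = norm.cdf(b) - norm.cdf(a)

print(f"empirical: {empirical:.4f}   F(b) - F(a): {analytic:.4f}")
```

With a million samples the two values typically agree to about three decimal places.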


Functions of One RV

In many cases it is necessary to examine what happens to RV x as it passes through various transformations, such as a random signal passing through a nonlinear device. Suppose that the output of some nonlinear device with input x can be represented by the new RV:

y = g(x)

If the PDF of x is known to be f_x(x), and the function y = g(x) has a unique inverse, the PDF of y is related by

f_y(y) = f_x(x) / |g′(x)|
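This relation can be verified directly for a monotonic transformation. Below is a sketch under the assumption (not taken from the text) that x is standard normal and g(x) = e^x, chosen because it is strictly increasing and therefore has the unique inverse x = ln y:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)
y = np.exp(x)                      # y = g(x) = e^x, strictly increasing

# Histogram estimate of f_y(y) from the transformed samples
hist, edges = np.histogram(y, bins=200, range=(0.0, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Change-of-variables prediction: with x = ln(y) and g'(x) = e^x = y,
# f_y(y) = f_x(ln y) / |g'(ln y)| = f_x(ln y) / y
for point in (0.5, 1.0, 2.0):
    i = np.argmin(np.abs(centers - point))
    formula = norm.pdf(np.log(point)) / point
    print(f"y={point}: histogram {hist[i]:.3f}   formula {formula:.3f}")
```

The histogram values match the change-of-variables formula, which here reproduces the familiar lognormal density.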

If the inverse of y = g(x) is not unique, and x1, x2, . . . , xn are all of the values for which y = g(x1) = g(x2) = · · · = g(xn), then the previous relation is modified to

f_y(y) = f_x(x1)/|g′(x1)| + f_x(x2)/|g′(x2)| + · · · + f_x(xn)/|g′(xn)|

Another method for finding the PDF of y involves the characteristic function. For example, given that y = g(x), the characteristic function for y can be found directly from the PDF for x through the expected value relation

Φ_y(s) = E[e^{isg(x)}] = ∫_{−∞}^{∞} e^{isg(x)} f_x(x) dx

Consequently, the PDF for y can be recovered from the characteristic function Φ_y(s) through the inverse relation

f_y(y) = (1/2π) ∫_{−∞}^{∞} e^{−isy} Φ_y(s) ds
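Both integrals can be evaluated on a grid to sketch the method numerically. The example below assumes, purely for illustration, the transformation g(x) = 2x + 1 with x standard normal, so the recovered PDF can be checked against the exact N(1, 4) density; it computes Φ_y(s) by quadrature and then applies the inverse relation:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

# Illustrative transformation: g(x) = 2x + 1 with x standard normal,
# chosen so that y is exactly N(1, 4) and the recovery can be checked.
def g(x):
    return 2.0 * x + 1.0

x = np.linspace(-8.0, 8.0, 2001)   # integration grid for x
fx = norm.pdf(x)
s = np.linspace(-8.0, 8.0, 1601)   # grid for the transform variable s

# Phi_y(s) = E[exp(i s g(x))] = integral of exp(i s g(x)) f_x(x) over x
phi = trapezoid(np.exp(1j * np.outer(s, g(x))) * fx, x, axis=1)

# Inverse relation: f_y(y) = (1/2pi) * integral of exp(-i s y) Phi_y(s) over s
y = 1.5
fy = (trapezoid(np.exp(-1j * s * y) * phi, s) / (2.0 * np.pi)).real

print(f"recovered f_y({y}) = {fy:.5f}")
print(f"exact N(1,4) pdf   = {norm.pdf(y, loc=1, scale=2):.5f}")
```

A grid-based inversion like this works well when Φ_y(s) decays quickly; transformations producing heavy-tailed or singular densities would need more careful quadrature.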


Example: Square-Law Device

The output of a square-law device is defined by the quadratic transformation

y = ax², a > 0

where x is the RV input. Find an expression for the PDF f_y(y) given that we know f_x(x).

Solution: We first observe that if y < 0, then y = ax² has no real solutions; hence, it follows that f_y(y) = 0 for y < 0. For y > 0, there are two solutions to y = ax², given by

x1 = √(y/a), x2 = −√(y/a)

where

g′(x1) = 2ax1 = 2√(ay)
g′(x2) = 2ax2 = −2√(ay)

In this case, we deduce that the PDF for RV y is defined by

f_y(y) = [1/(2√(ay))] [f_x(√(y/a)) + f_x(−√(y/a))] U(y)

where U(y) is the unit step function.


It can also be shown that the CDF for y is

F_y(y) = [F_x(√(y/a)) − F_x(−√(y/a))] U(y)
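A Monte Carlo sketch can confirm this result. The values below, a standard normal input x and a = 2, are illustrative assumptions rather than values from the example:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a = 2.0
x = rng.standard_normal(1_000_000)
y = a * x**2                       # square-law device output

def fy(y):
    # f_y(y) = [f_x(sqrt(y/a)) + f_x(-sqrt(y/a))] / (2*sqrt(a*y)) for y > 0
    r = np.sqrt(y / a)
    return (norm.pdf(r) + norm.pdf(-r)) / (2.0 * np.sqrt(a * y))

hist, edges = np.histogram(y, bins=400, range=(0.0, 10.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for point in (0.5, 2.0, 5.0):
    i = np.argmin(np.abs(centers - point))
    print(f"y={point}: histogram {hist[i]:.4f}   formula {fy(point):.4f}")
```

The histogram of simulated outputs tracks the derived f_y(y), including the characteristic blow-up of the density as y → 0+.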



Random Processes

Example: Correlation and PDF

Consider the random process x(t) = a cos ωt + b sin ωt, where ω is a constant and a and b are statistically independent Gaussian RVs satisfying

〈a〉 = 〈b〉 = 0, 〈a²〉 = 〈b²〉 = σ²

Determine
1. the correlation function for x(t), and
2. the second-order PDF for x1 and x2.

Solution: (1) Because a and b are statistically independent RVs, it follows that 〈ab〉 = 〈a〉〈b〉 = 0, and thus

R_x(t1, t2) = 〈(a cos ωt1 + b sin ωt1)(a cos ωt2 + b sin ωt2)〉
            = 〈a²〉 cos ωt1 cos ωt2 + 〈b²〉 sin ωt1 sin ωt2
            = σ² cos[ω(t2 − t1)]

or

R_x(t1, t2) = σ² cos ωτ, τ = t2 − t1
(2) The expected value of the random process x(t) is 〈x(t)〉 = 〈a〉 cos ωt + 〈b〉 sin ωt = 0. Hence, σ_x² = R_x(0) = σ², and the first-order PDF of x(t) is given by

f_x(x, t) = (1/(σ√(2π))) e^{−x²/2σ²}
The second-order PDF depends on the correlation coefficient between x1 and x2, which, because the mean is zero, can be calculated from

ρ_x(τ) = R_x(τ)/R_x(0) = cos ωτ
and consequently,

f_x(x1, t1; x2, t2) = (1/(2πσ²|sin ωτ|)) exp[−(x1² − 2x1x2 cos ωτ + x2²)/(2σ² sin² ωτ)]
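Part (1) of the solution can be checked by simulating many realizations of a and b and forming the ensemble average. This sketch uses the illustrative values σ = 1.5 and ω = 2, which are assumptions, not values from the example:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, omega = 1.5, 2.0          # illustrative values, not from the example
n_trials = 200_000

# Independent zero-mean Gaussian RVs a, b with variance sigma^2
a = rng.normal(0.0, sigma, n_trials)
b = rng.normal(0.0, sigma, n_trials)

t1, t2 = 0.3, 1.0
x1 = a * np.cos(omega * t1) + b * np.sin(omega * t1)
x2 = a * np.cos(omega * t2) + b * np.sin(omega * t2)

# Ensemble average <x(t1)x(t2)> vs the analytic result sigma^2 cos(omega*tau)
empirical = np.mean(x1 * x2)
analytic = sigma**2 * np.cos(omega * (t2 - t1))
print(f"empirical R_x: {empirical:.4f}   analytic: {analytic:.4f}")
```

The empirical average converges to σ² cos ωτ, confirming that the correlation depends only on the time difference τ = t2 − t1.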



Transformations of Random Processes

Memoryless Nonlinear Transformations

Consider a system in which the output y(t1) at time t1 depends only on the input x(t1) and not on any other past or future values of x(t). If the system is designated by the relation

y(t) = g[x(t)]

where y = g(x) is a function assigning a unique value of y to each value of x, it is said that the system effects a memoryless transformation. Because the function g(x) does not depend explicitly on time t, it can also be said that the system is time invariant. For example, if g(x) is not a function of time t, it follows that the output of a time-invariant system to the input x(t + ε) can be expressed as

y(t + ε) = g[x(t + ε)]

If input and output are both sampled at times t1, t2, . . . , tn to produce the samples x1, x2, . . . , xn and y1, y2, . . . , yn, respectively, then

y_k = g(x_k), k = 1, 2, . . . , n

This relation is a transformation of the RVs x1, x2, . . . , xn into a new set of RVs y1, y2, . . . , yn. It then follows that the joint density of the RVs y1, y2, . . . , yn can be found directly from the corresponding density of the RVs x1, x2, . . . , xn through the above relationship.

Memoryless processes or fields have no memory of other events in location or time. In probability and statistics, memorylessness is a property of certain probability distributions: the exponential distributions of non-negative real numbers and the geometric distributions of non-negative integers. These are the only memoryless probability distributions, and both arise naturally from Poisson statistics.
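The defining property Pr(x > s + t | x > s) = Pr(x > t) is simple to verify by simulation. This sketch assumes an exponential RV with mean 2; the scale value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=1_000_000)  # exponential with mean 2

s, t = 1.0, 1.5
# Memoryless property: Pr(x > s + t | x > s) should equal Pr(x > t)
conditional = np.mean(x > s + t) / np.mean(x > s)
unconditional = np.mean(x > t)
print(f"Pr(x > s+t | x > s) = {conditional:.4f}   Pr(x > t) = {unconditional:.4f}")
```

Both estimates converge to e^{−t/2} ≈ 0.4724, as the memoryless property requires; running the same check on, say, a uniform RV shows the property fail.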
