This document discusses important random variables, including the binomial, geometric, and Poisson distributions, with examples of Poisson applications. It also summarizes key properties of continuous random variables such as the uniform, Gaussian, and exponential distributions, along with the concepts of mean, variance, and memorylessness.

Important Random Variables

Binomial: $S_X = \{0, 1, 2, \ldots, n\}$

$$P_k = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n, \quad 0 \le p \le 1$$

Geometric: $S_X = \{0, 1, 2, \ldots\}$

$$P_k = p(1-p)^k, \qquad k = 0, 1, \ldots, \quad 0 \le p \le 1$$

Poisson: $S_X = \{0, 1, 2, \ldots\}$

$$P_k = \frac{\alpha^k}{k!}\, e^{-\alpha}, \qquad k = 0, 1, \ldots, \quad \alpha > 0$$
Poisson Distribution:
Used for modeling the number of events in an interval or set, if they occur randomly and independently.

If N = number of events in an interval of size T,

$$P(N = k) = \frac{\alpha^k}{k!}\, e^{-\alpha}$$

where $\alpha$ = event rate $\lambda \times T$ = average number of events per size-T interval.

e.g. Failure rate for chips in a system = 2 / year

$$P(\text{10 failures in a year}) = \frac{2^{10}}{10!}\, e^{-2}$$
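Evaluating this number directly:

```python
from math import exp, factorial

# Chip-failure example from above: alpha = 2 failures/year.
alpha = 2
p10 = alpha**10 / factorial(10) * exp(-alpha)
print(p10)   # ≈ 3.8e-5: ten failures in a year is very unlikely
```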

For a large number of Bernoulli trials with small success rate p,

Number of successes ≈ Poisson:

$$P(k \text{ hits in } n \text{ attempts}) = \binom{n}{k} p^k (1-p)^{n-k} \to \frac{\alpha^k}{k!}\, e^{-\alpha}$$

with $\alpha = np$, as $n \to \infty$ with $np$ held constant.
e.g.

$$P(\text{10 jackpots in } 10^6 \text{ attempts}) = \binom{10^6}{10}\, p^{10} (1-p)^{10^6 - 10} \approx \frac{(10^6 p)^{10}}{10!}\, e^{-10^6 p} \quad \text{for small } p$$
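The quality of this approximation is easy to check numerically. The slide leaves p unspecified; p = 2e-6 below is a hypothetical value chosen so that α = np = 2:

```python
from math import comb, exp, factorial

# Jackpot example: exact binomial probability vs Poisson approximation.
# p = 2e-6 is a hypothetical jackpot probability (not given on the slide).
n, k = 10**6, 10
p = 2e-6
exact  = comb(n, k) * p**k * (1 - p)**(n - k)
approx = (n * p)**k / factorial(k) * exp(-n * p)
print(exact, approx)   # agree to several significant digits
```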
Examples of Poisson Distribution Applications
1)
$P(\text{bit error in a communication line}) = 10^{-3}$
$P(> 10 \text{ errors in a 1000-bit block}) = ?$
$n = 1000, \quad p = 10^{-3}, \quad np = 1000(10^{-3}) = 1$

$$P(> 10 \text{ errors}) = 1 - P(\le 10 \text{ errors}) = 1 - \sum_{k=0}^{10} \binom{1000}{k} (10^{-3})^k (1 - 10^{-3})^{1000-k}$$

$$\approx 1 - \sum_{k=0}^{10} \frac{(1000 \cdot 10^{-3})^k}{k!}\, e^{-1000 \cdot 10^{-3}} = 1 - e^{-1} \sum_{k=0}^{10} \frac{1}{k!}$$
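Carrying out both computations (exact binomial sum and the Poisson approximation):

```python
from math import comb, exp, factorial

# Bit-error example: exact binomial tail vs the Poisson approximation.
n, p = 1000, 1e-3
a = n * p                                    # alpha = np = 1
exact  = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(11))
approx = 1 - exp(-a) * sum(a**k / factorial(k) for k in range(11))
print(exact, approx)    # both on the order of 1e-8, agreeing closely
```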

2)
$P(\text{defective component}) = 10^{-3}$
number of components = 1000
$P(\text{machine functions}) = P(\text{number of defective components} = 0)$

$$\approx \frac{(1000 \cdot 10^{-3})^0}{0!}\, e^{-1000 \cdot 10^{-3}} = e^{-1}$$
Continuous Random Variables
Uniform: $S_X = [a, b]$

$$f_X(x) = \frac{1}{b-a}, \qquad a \le x \le b$$
Gaussian: $S_X = (-\infty, \infty)$

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-m)^2 / 2\sigma^2}$$

Exponential: $S_X = [0, \infty)$

$$f_X(x) = \lambda e^{-\lambda x}, \qquad x \ge 0, \quad \lambda > 0$$
Mean:

$$E(X) = \int_{-\infty}^{\infty} s\, f_X(s)\, ds$$

also called the expected value of X.

If X is discrete:

$$E(X) = \sum_k x_k\, P_X(x_k)$$

E(X) does not exist for all random variables (e.g., the Cauchy distribution has no mean).

It requires that

$$E|X| = \int_{-\infty}^{\infty} |s|\, f_X(s)\, ds < \infty$$

or

$$\sum_k |x_k|\, P_X(x_k) < \infty \qquad \text{for discrete } X$$
Variance:

$$\sigma_X^2 = \mathrm{Var}(X) = E\!\left[(X - E(X))^2\right] = E(X^2) - (E(X))^2$$

Standard Deviation = $\sqrt{\mathrm{Var}}$

Variance measures the dispersion of X about the mean.
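The two expressions for the variance can be checked against each other on a sample. A Monte Carlo sketch (the exponential with λ = 0.5, so mean 2 and variance 4, is chosen only as a convenient test case):

```python
import random

# Monte Carlo check that E[(X - E(X))^2] = E(X^2) - (E(X))^2,
# using an exponential sample with lambda = 0.5 (mean 2, variance 4).
random.seed(0)
xs = [random.expovariate(0.5) for _ in range(200000)]
mean = sum(xs) / len(xs)
ex2  = sum(x * x for x in xs) / len(xs)
var_central = sum((x - mean)**2 for x in xs) / len(xs)
var_moment  = ex2 - mean**2
print(mean, var_central, var_moment)   # mean ≈ 2; both variance forms ≈ 4 and agree
```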

Moments:

$n$th Moment of $X$ = $E(X^n) = \displaystyle\int_{-\infty}^{\infty} x^n f_X(x)\, dx$

$n$th Central Moment = $E\!\left[(X - E(X))^n\right]$

$n$th Absolute Moment = $E(|X|^n)$

$n$th Generalized Moment about $a$ = $E((X - a)^n)$
Markov Inequality:

$$P(X \ge a) \le \frac{E(X)}{a} \qquad \text{if } X \ge 0, \quad a > 0$$

[useful only if $a > E(X)$]
Chebychev Inequality:

$$P(|X - m| \ge a) \le \frac{\sigma^2}{a^2}, \qquad a > 0, \quad m = E(X)$$

Bienaymé Inequality:

$$P(|X - b| \ge a) \le \frac{E(|X - b|^n)}{a^n}, \qquad a > 0$$

Chebychev inequality is the special case $b = m$, $n = 2$.
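The Markov and Chebychev bounds can be illustrated empirically. A sketch using an exponential sample with λ = 1 (so E(X) = 1 and Var(X) = 1, a convenient choice, not from the slide):

```python
import random
from math import exp

# Empirical check of the Markov and Chebychev bounds,
# using an exponential sample with lambda = 1 (E(X) = 1, Var(X) = 1).
random.seed(1)
xs = [random.expovariate(1.0) for _ in range(100000)]
m, var, a = 1.0, 1.0, 3.0

p_tail = sum(x >= a for x in xs) / len(xs)          # true value e^-3 ≈ 0.05
print(p_tail, "<=", m / a)                          # Markov bound 1/3
p_dev = sum(abs(x - m) >= a for x in xs) / len(xs)  # P(|X - m| >= a)
print(p_dev, "<=", var / a**2)                      # Chebychev bound 1/9
```

Both bounds hold but are loose, which is typical: they use only the mean (Markov) or the mean and variance (Chebychev), not the full distribution.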
Memoryless Random Variables:
A random variable X is called memoryless if, for h > 0,

$$P(X > x + h \mid X > x) = P(X > h)$$

i.e., the probability of exceeding x + h, given that x has already been exceeded, does not depend on x: the history is irrelevant.

Geometric is the only memoryless discrete random variable.

Exponential is the only memoryless continuous random variable:

$$P(X > x + h \mid X > x) = \frac{P(X > x + h,\, X > x)}{P(X > x)} = \frac{P(X > x + h)}{P(X > x)} = \frac{e^{-\lambda(x+h)}}{e^{-\lambda x}} = e^{-\lambda h} = P(X > h)$$
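The memoryless property is easy to see in simulation. A sketch with λ = 1, x = 1, h = 0.5 (illustrative values):

```python
import random
from math import exp

# Memorylessness of the exponential: estimate P(X > x+h | X > x)
# and P(X > h) from samples; both should equal e^{-lambda*h}.
random.seed(2)
xs = [random.expovariate(1.0) for _ in range(500000)]
x, h = 1.0, 0.5

survivors = [v for v in xs if v > x]
cond = sum(v > x + h for v in survivors) / len(survivors)
uncond = sum(v > h for v in xs) / len(xs)
print(cond, uncond)   # both ≈ e^{-0.5} ≈ 0.61
```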
Gaussian Random Variable

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-m)^2 / 2\sigma^2}$$

[Figure: bell-shaped pdf $f_X(x)$ centered at $m$]

$$F_X(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(s-m)^2 / 2\sigma^2}\, ds$$

There is no closed form for $F_X(x)$.

The Gaussian distribution is also called the normal distribution, and is often popularly referred to as the bell curve.

It is found to be a good model for random variables in many real-world systems, and has many useful properties (as we will see later in the class).
Standard Gaussian Random Variable

Define

$$G(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy$$

Note: the textbook uses $\Phi$ instead of G, but we will later use $\Phi$ for something else.

$$F_X(x) = G\!\left(\frac{x - m}{\sigma}\right) = \int_{-\infty}^{(x-m)/\sigma} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy$$

$G(y)$ is the cdf of the random variable $Y = \dfrac{X - m}{\sigma}$.

Y is called a standard Gaussian random variable.

$$G(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy$$

is called the standard Gaussian cdf.


$$\mathrm{erf}(x) = \int_{0}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy$$

is called the error function.

$$G(x) = \tfrac{1}{2} + \mathrm{erf}(x), \qquad x \ge 0$$

$$G(x) = \tfrac{1}{2} - \mathrm{erf}(-x), \qquad x < 0$$
2
Define $Q(x) = 1 - G(x)$:

$$Q(x) = 1 - \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy = \frac{1}{2} - \mathrm{erf}(x)$$

$$Q(0) = \frac{1}{2}$$

$$Q(-x) = 1 - Q(x)$$

A useful closed-form sandwich bound:

$$\left(1 - \frac{1}{x^2}\right)\frac{1}{x\sqrt{2\pi}}\, e^{-x^2/2} \le Q(x) \le \frac{1}{x\sqrt{2\pi}}\, e^{-x^2/2}, \qquad 0 < x < \infty$$
De Moivre-Laplace Theorem:

If $np(1-p) \gg 1$,

$$\binom{n}{k} p^k (1-p)^{n-k} \approx \frac{1}{\sqrt{2\pi\, np(1-p)}}\, e^{-(k-np)^2 / 2np(1-p)}$$

i.e., Binomial → Gaussian for large n and finite p, where $m = np$ and $\sigma^2 = np(1-p)$.

DL 
if k ~ binomially
k2n k
P(k1  k  k 2 )     p (1  p) n  k
 
k  k1 k
 

 k 2  np   k1  np 
G    G 
 np (1  p)   np(1  p ) 
   
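A quick numerical sketch of this approximation (the values n = 100, p = 0.5, k1 = 45, k2 = 55 are chosen only for illustration):

```python
from math import comb, erfc, sqrt

def G(x):                       # standard Gaussian cdf, via math.erfc
    return 1 - 0.5 * erfc(x / sqrt(2))

# Binomial(n = 100, p = 0.5): P(45 <= k <= 55), exact vs Gaussian.
n, p = 100, 0.5
k1, k2 = 45, 55
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k1, k2 + 1))
s = sqrt(n * p * (1 - p))
approx = G((k2 - n * p) / s) - G((k1 - n * p) / s)
print(exact, approx)   # ≈ 0.73 vs ≈ 0.68
```

The two agree to within a few percent; replacing k1, k2 by k1 - 0.5, k2 + 0.5 (a continuity correction, not shown on the slide) tightens the match considerably.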
Functions of Random Variables:
If X is a random variable,
Y = g(X) is also a random variable.

$$E(g(X)) = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx$$
Any event involving g(X), such as $\{g(X) = a\}$, can be seen as a union of events in $S_X$. This is called the equivalent event.

e.g. [Figure: curve g(x) crossing the level a at x = i, j, k]

$$\{g(X) = a\} = \{i, j, k\}$$

$$P(g(X) = a) = P(X = i) + P(X = j) + P(X = k)$$
Example: [Figure: y = g(x), where the strip (y, y + dy) maps back to three intervals $(x_1, x_1 + dx_1)$, $(x_2, x_2 + dx_2)$, $(x_3, x_3 + dx_3)$]

$$P(y < Y \le y + dy) = P(x_1 < X \le x_1 + dx_1) + P(x_2 < X \le x_2 + dx_2) + P(x_3 < X \le x_3 + dx_3)$$

For small dy (with $|dx_k|$ the length of the k-th interval),

$$f_Y(y)\, |dy| = f_X(x_1)\, |dx_1| + f_X(x_2)\, |dx_2| + f_X(x_3)\, |dx_3|$$

In general, if $y = g(x)$ has n solutions $\{x_k\}$,

$$f_Y(y) = \sum_k f_X(x) \left|\frac{dx}{dy}\right|_{x = x_k}$$
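Applying the formula to a concrete case: Y = X² with X standard Gaussian (our own worked example, not from the slide). For y > 0 there are two solutions x = ±√y, each with |dx/dy| = 1/(2√y):

```python
import random
from math import exp, pi, sqrt

# Y = g(X) = X^2 with X standard Gaussian: for y > 0 the solutions are
# x = ±sqrt(y), each with |dx/dy| = 1/(2*sqrt(y)), so by the formula above
# f_Y(y) = [f_X(sqrt(y)) + f_X(-sqrt(y))] / (2*sqrt(y)) = e^{-y/2} / sqrt(2*pi*y)
def f_Y(y):
    return exp(-y / 2) / sqrt(2 * pi * y)

# Monte Carlo check: fraction of samples in a narrow bin vs f_Y(y) * width.
random.seed(3)
ys = [random.gauss(0, 1)**2 for _ in range(500000)]
y0, w = 1.0, 0.05
frac = sum(y0 <= y < y0 + w for y in ys) / len(ys)
print(frac / w, f_Y(y0 + w / 2))    # both ≈ 0.24
```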
