Probability & Random Variable Cheat Sheet

This cheat sheet defines key concepts in probability and statistics: expectations, probability density functions, joint pdfs, conditional expectations, moments, variance, and independence of random variables; properties of variance, covariance, correlation, moment generating functions, and functions of random variables; and multivariate distributions, correlation matrices, covariance matrices, and linear transformations of random vectors.


Expectations

E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \sum_{x_i \in X} x_i\, P(X = x_i)

E[X \mid A] = \int_{-\infty}^{\infty} x f_X(x \mid A)\,dx

E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx

E[a_1 g_1(X) + \cdots + a_n g_n(X)] = a_1 E[g_1(X)] + \cdots + a_n E[g_n(X)]

Joint Probability Density Function (joint pdf)

f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}(x, y)}{\partial x\,\partial y} wherever F_{X,Y} is differentiable

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, v)\,dv

P(x_1 < X \le x_2,\ y_1 < Y \le y_2) = F_{X,Y}(x_2, y_2) - F_{X,Y}(x_2, y_1) - F_{X,Y}(x_1, y_2) + F_{X,Y}(x_1, y_1)
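A quick numerical check of the expectation formulas above, assuming SciPy is available; the Exponential(1) density and the choice g(x) = x^2 are illustrative assumptions, not part of the original sheet.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative density: Exponential(1), f_X(x) = exp(-x) for x >= 0.
f_X = lambda x: np.exp(-x)

# E[X] = integral of x f_X(x) dx  (equals 1 for Exponential(1))
E_X, _ = quad(lambda x: x * f_X(x), 0, np.inf)

# E[g(X)] = integral of g(x) f_X(x) dx with g(x) = x^2  (equals 2)
E_g, _ = quad(lambda x: x**2 * f_X(x), 0, np.inf)

# Linearity: E[a1*g1(X) + a2*g2(X)] = a1*E[g1(X)] + a2*E[g2(X)]
lhs, _ = quad(lambda x: (3 * x + 2 * x**2) * f_X(x), 0, np.inf)
print(E_X, E_g, lhs, 3 * E_X + 2 * E_g)  # 1.0 2.0 7.0 7.0
```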

Statistical Independence of Random Variables

If X \perp Y, then:

F_{X,Y}(x, y) = F_X(x) F_Y(y)

f_{X,Y}(x, y) = f_X(x) f_Y(y)

var(X + Y) = var(X) + var(Y)

Variance

\sigma_X^2 = E\left[(X - E[X])^2\right] = E[X^2] - (E[X])^2 \ge 0

var(X + c) = var(X) = \sigma_X^2

var(cX) = c^2\, var(X)

If E[XY] = E[X] E[Y], then X and Y are uncorrelated.

In general, for independent X_1, \ldots, X_n: var\left(\sum_i X_i\right) = \sum_i var(X_i)
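A Monte Carlo sanity check of the variance rules, assuming NumPy; the Exponential/Normal distributions, constants, and sample size are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=1_000_000)   # samples of X
y = rng.normal(1.0, 3.0, size=1_000_000)   # samples of Y, independent of X
c = 5.0

print(np.var(x + c), np.var(x))              # var(X + c) = var(X)
print(np.var(c * x), c**2 * np.var(x))       # var(cX) = c^2 var(X)
print(np.var(x + y), np.var(x) + np.var(y))  # independence: var(X+Y) = var(X) + var(Y)
```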

Moments

m_n(X) = E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\,dx

Two Functions of Two Random Variables

For V = v(X, Y), W = w(X, Y), with root (x_1, y_1) mapping to (v_1, w_1):

J(x_1, y_1) = \begin{vmatrix} \partial v/\partial x & \partial w/\partial x \\ \partial v/\partial y & \partial w/\partial y \end{vmatrix}_{x = x_1,\, y = y_1}

f_{X,Y}(x, y) = \frac{f_{V,W}(v_1, w_1)}{|J(v_1, w_1)|}, \qquad J(v, w) = \frac{1}{J(x, y)}
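A minimal sketch of the Jacobian transformation rule, assuming SciPy; the linear transform V = X + Y, W = X - Y with X, Y i.i.d. standard normal is an illustrative example, not from the original.

```python
import numpy as np
from scipy import stats

# Forward Jacobian of V = X + Y, W = X - Y:
# J(x, y) = | dv/dx  dw/dx | = | 1   1 | = -2,  so |J(x, y)| = 2.
#           | dv/dy  dw/dy |   | 1  -1 |
f_XY = lambda x, y: stats.norm.pdf(x) * stats.norm.pdf(y)

v, w = 0.8, -0.3
x1, y1 = (v + w) / 2, (v - w) / 2  # unique root of the transform

f_VW = f_XY(x1, y1) / 2.0          # f_{V,W}(v, w) = f_{X,Y}(x1, y1) / |J(x1, y1)|

# Check: under this transform V and W are independent N(0, 2).
print(f_VW, stats.norm.pdf(v, scale=np.sqrt(2)) * stats.norm.pdf(w, scale=np.sqrt(2)))
```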

Moment Generating Function

g_X(r) = E[\exp(rX)] = \int_{-\infty}^{\infty} e^{rx} f_X(x)\,dx

\frac{d}{dr} g_X(r) = \int_{-\infty}^{\infty} \frac{d}{dr} e^{rx} f_X(x)\,dx = \int_{-\infty}^{\infty} x e^{rx} f_X(x)\,dx

\left.\frac{d^n}{dr^n} g_X(r)\right|_{r=0} = E[X^n] = m_n(X)

Y = X + a \implies g_Y(r) = E[\exp(r(X + a))] = e^{ra} g_X(r)

Z = cX \implies g_Z(r) = E[\exp(rcX)] = g_X(cr)

Characteristic function: g_X(j\omega) = g_X(r)|_{r = j\omega}

Markov Inequality: let X be a non-negative rv s.t. E[X] < \infty; then P(X > x) \le \frac{E[X]}{x}

Chebyshev Inequality: let X be a rv s.t. \sigma_X^2 < \infty; then P(|X - E[X]| > \epsilon) \le \frac{\sigma_X^2}{\epsilon^2}

Chernoff Inequality: P(X \ge a) \le e^{-ra} g_X(r) for all r > 0

Joint Moments

m_{h,l}(X, Y) = E[X^h Y^l] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^h y^l f_{X,Y}(x, y)\,dx\,dy

Correlation: R_{XY} = m_{1,1}(X, Y) = E[XY]

Covariance: K_{XY} = E[XY] - E[X] E[Y]

Correlation Coefficient: \rho_{XY} = \frac{K_{XY}}{\sigma_X \sigma_Y}
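A sketch of recovering moments by differentiating the MGF, assuming SymPy; the Exponential(lam) MGF lam/(lam - r) is an illustrative example, not from the original.

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)
g_X = lam / (lam - r)  # MGF of an Exponential(lam) rv, valid for r < lam

# m_n(X) = d^n/dr^n g_X(r) evaluated at r = 0; equals n!/lam^n here.
for n in range(1, 4):
    m_n = sp.diff(g_X, r, n).subs(r, 0)
    print(n, sp.simplify(m_n))  # 1/lam, 2/lam**2, 6/lam**3
```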

Multiple Random Variables

Joint Distribution Function (joint cdf): F_X(x) = P\left(\bigcap_{i=1}^{n} \{X_i \le x_i\}\right)

F_{X,Y}(-\infty, y) = F_{X,Y}(x, -\infty) = 0

F_{X,Y}(x, \infty) = F_X(x), \quad F_{X,Y}(\infty, y) = F_Y(y), \quad F_{X,Y}(\infty, \infty) = 1

P(x_1 < X \le x_2,\ Y \le y) = F_{X,Y}(x_2, y) - F_{X,Y}(x_1, y)

Multidimensional Case

X = [X_1 \ \cdots \ X_n]^T, \quad E[X] = [E[X_1] \ \cdots \ E[X_n]]^T

Correlation Matrix: E[XX^T] = \begin{bmatrix} E[X_1^2] & \cdots & E[X_1 X_n] \\ \vdots & \ddots & \vdots \\ E[X_n X_1] & \cdots & E[X_n^2] \end{bmatrix}

Covariance Matrix: K_X = E\left[(X - E[X])(X - E[X])^T\right]

Cross-Covariance Matrix: K_{XY} = E[XY^T] - E[X] E[Y]^T = K_{YX}^T

For Z = AX + BY with E[X] = E[Y] = 0:

K_Z = A K_X A^T + B K_Y B^T + A K_{XY} B^T + B K_{YX} A^T

K_X is symmetric, so there exist orthonormal eigenvectors (the columns of Q) and a diagonal eigenvalue matrix \Lambda s.t. K_X = Q \Lambda Q^T
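A NumPy sanity check of the linear-transformation rule for K_Z and the eigendecomposition of K_X; the dimensions, the matrices A and B, and the sample construction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, m = 100_000, 3, 2
A = rng.standard_normal((2, n))
B = rng.standard_normal((2, m))

# Correlated samples, centered so that E[X] = E[Y] = 0 empirically.
S = rng.standard_normal((N, n + m)) @ rng.standard_normal((n + m, n + m))
X, Y = S[:, :n] - S[:, :n].mean(0), S[:, n:] - S[:, n:].mean(0)

K_X = X.T @ X / (N - 1)
K_Y = Y.T @ Y / (N - 1)
K_XY = X.T @ Y / (N - 1)   # cross-covariance; K_YX = K_XY.T

Z = X @ A.T + Y @ B.T       # Z = AX + BY, applied sample by sample
K_Z = A @ K_X @ A.T + B @ K_Y @ B.T + A @ K_XY @ B.T + B @ K_XY.T @ A.T
print(np.allclose(np.cov(Z.T), K_Z))  # True: the identity holds exactly

# Symmetric K_X admits K_X = Q diag(lam) Q^T with orthonormal Q.
lam, Q = np.linalg.eigh(K_X)
print(np.allclose(Q @ np.diag(lam) @ Q.T, K_X))  # True
```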
