RVSP Unit 2

Moments

In science, a moment is a measure of the energy which generates a frequency. In statistics, moments are the arithmetic means of the first, second, third and so on powers of the deviations taken from either the mean or an arbitrary point of a distribution. In statistics, generally for any frequency distribution, moments are obtained which describe the information about the mean, variance and skewness.

Moments can be classified into raw and central moments. Raw moments are measured about any arbitrary point A. If A is taken to be zero, the raw moments are called moments about the origin. If A is taken to be the arithmetic mean, we get central moments.
Types of moments:
1) Moment about an arbitrary point
2) Moment about the origin
3) Moment about the mean

Moment about an arbitrary point: When the actual mean is a fraction, moments are first calculated about an arbitrary point and then converted to moments about the actual mean. The nth moment about an arbitrary point A is given by

Mn' = Σ fi (xi - A)^n / N

In case of n = 1: M1' = Σ fi (xi - A) / N
In case of n = 2: M2' = Σ fi (xi - A)^2 / N
Moment about the origin: In the case when we take the arbitrary point A = 0, we get the moment about the origin, which can be defined as

mn = E[X^n]

In case of n = 0: m0 = E[X^0] = 1
In case of n = 1: m1 = E[X] (mean)
In case of n = 2: m2 = E[X^2]
Moment about the mean (or) central moment: When we take the deviations from the actual mean and calculate the moments, this is known as the moment about the mean (or) central moment. The moment about the mean can be defined as

μn = E[(X - X̄)^n] = Σ fi (xi - X̄)^n / N

In case of n = 0: μ0 = 1
In case of n = 1: μ1 = E[X - X̄] = E[X] - E[X̄] = X̄ - X̄ = 0
In case of n = 2: μ2 = E[(X - X̄)^2] = variance
In case of n = 3: μ3 = E[(X - X̄)^3], which measures skewness
Note:
a) E[X] = ∫ x f(x) dx (in case of continuous form)
b) E[X] = Σ x p(x) (in case of discrete form)
c) In case of variance: E[(X - X̄)^2] = ∫ (x - X̄)^2 f(x) dx (in case of continuous form)

The above terms defined in continuous form are obtained from E[X^n] = ∫ x^n f(x) dx.
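As a concrete illustration of these formulas, here is a minimal Python sketch (not from the source; the class values and frequencies below are made up purely for illustration) that computes raw and central moments of a frequency distribution:

```python
# Raw and central moments of a frequency distribution with NumPy.
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # class values x_i (hypothetical)
f = np.array([3, 7, 12, 5, 3])              # frequencies f_i (hypothetical)
N = f.sum()

def moment_about(n, A=0.0):
    """n-th moment about the arbitrary point A: sum f_i (x_i - A)^n / N."""
    return np.sum(f * (x - A) ** n) / N

mean = moment_about(1)            # m1 = moment about the origin with n = 1
variance = moment_about(2, mean)  # mu2 = second central moment
mu3 = moment_about(3, mean)       # third central moment (skewness measure)

print(mean, variance, mu3)
```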
3.3 Properties of Expectation

(1) If a random variable X is a constant, i.e., X = a, where a is a constant, then

E[a] = a (3.8)

Proof
We know that

E[X] = ∫ x f_X(x) dx

E[a] = ∫ a f_X(x) dx = a ∫ f_X(x) dx

= a (1) = a

∴ E[a] = a

(2) If a is any constant, then


E[aX] = a E[X] (3.9)
Proof

E[aX] = ∫ ax f_X(x) dx = a ∫ x f_X(x) dx

= a E[X]

(3) If a and b are any two constants, then

E[aX + b] = a E[X] + b (3.10)

Proof

E[aX + b] = ∫ (ax + b) f_X(x) dx = a ∫ x f_X(x) dx + b ∫ f_X(x) dx

= a E[X] + b
(4) If X ≥ 0, then E[X] ≥ 0.
Proof

If X is a continuous random variable such that f_X(x) = 0 for x < 0, then

E[X] = ∫ x f_X(x) dx = ∫_0^∞ x f_X(x) dx

Since x ≥ 0 and f_X(x) ≥ 0,

E[X] = ∫_0^∞ x f_X(x) dx ≥ 0

Hence if X ≥ 0, then E[X] ≥ 0.


(5) If X is any random variable, then the inequality

|E[X]| ≤ E[|X|]

exists.

Proof

We know that X ≤ |X| and also -X ≤ |X|.

Then E[X] ≤ E[|X|], and E[-X] ≤ E[|X|] or -E[X] ≤ E[|X|]

∴ |E[X]| ≤ E[|X|]

(6) If g1(X) and g2(X) are two functions of a random variable X, then

E[g1(X) + g2(X)] = E[g1(X)] + E[g2(X)]

Proof
We know that

E[g(X)] = ∫ g(x) f_X(x) dx

E[g1(X) + g2(X)] = ∫ [g1(x) + g2(x)] f_X(x) dx

= ∫ g1(x) f_X(x) dx + ∫ g2(x) f_X(x) dx

= E[g1(X)] + E[g2(X)]

Similarly, for n functions, E[g1(X) + g2(X) + ... + gn(X)] = E[g1(X)] + E[g2(X)] + ... + E[gn(X)]

Example 3.2

If X is a discrete random variable with probability mass function given as in Table 3.1, find
(i) E[X], (ii) E[2X + 3], (iii) E[X²], and (iv) E[(2X + 1)²].

Table 3.1 Probability mass function of X

X    : -2    -1    0     1     2
P(X) : …/10  …/10  …/10  …/10  …/10

Solution
(i) X is a discrete random variable. We know that

E[X] = Σ xi P(xi)

From Table 3.1, E[X] = -0.3

(ii) E[2X + 3]
From the properties, we know that

E[2X + 3] = 2E[X] + 3

= 2(-0.3) + 3 = -0.6 + 3 = 2.4

(iii) To find E[X²], let g(X) = X².
Then, we know that

E[g(X)] = Σ g(xi) P(xi) = Σ xi² P(xi)

E[X²] = 2.1

(iv) To find E[(2X + 1)²], let g(X) = (2X + 1)² = 4X² + 4X + 1

E[4X² + 4X + 1] = 4E[X²] + 4E[X] + 1

= 4(2.1) + 4(-0.3) + 1 = 8.4 - 1.2 + 1 = 8.2
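The table entries are not legible in this copy, so the sketch below assumes one pmf with denominator 10 that reproduces the quoted moments E[X] = -0.3 and E[X²] = 2.1; the probabilities are an assumed reconstruction, not the book's table:

```python
# Hedged check of Example 3.2. The pmf is an assumption chosen to match
# the quoted moments; the original table entries are illegible.
import numpy as np

x = np.array([-2, -1, 0, 1, 2])
p = np.array([2, 4, 1, 1, 2]) / 10   # hypothetical pmf, sums to 1

E_X   = np.sum(x * p)                # (i)   -> -0.3
E_2X3 = 2 * E_X + 3                  # (ii)  -> 2.4
E_X2  = np.sum(x**2 * p)             # (iii) -> 2.1
E_sq  = 4 * E_X2 + 4 * E_X + 1       # (iv)  -> 8.2

print(E_X, E_2X3, E_X2, E_sq)
```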
3.5.3 Properties of Variance

(1) The variance of a constant is zero, i.e., if k is a constant, then

var(k) = 0 (3.26)

Proof

var(X) = E[(X - X̄)²]

var(k) = E[(k - k̄)²] = E[(k - k)²] = E[0] = 0

(2) If k is a constant, then for a random variable X,

var(kX) = k² var(X) (3.27)

Proof
The variance of kX is given by

var(kX) = E[(kX - kX̄)²]

= E[k² (X - X̄)²]

= k² E[(X - X̄)²]

var(kX) = k² var(X)
(3) For a given random variable X, the relationship between the variance and the moments is given by

σX² = m2 - m1² (3.28)

Proof

Let X be a random variable. The variance is

σX² = E[(X - X̄)²]

= E[X² + X̄² - 2X̄X]

= E[X²] + X̄² - 2X̄ E[X]

= E[X²] + X̄² - 2X̄²

σX² = E[X²] - X̄²

or σX² = m2 - m1²
(4) If X is a random variable and a, b are real constants, then

var(aX + b) = a² var(X) (3.29)

Proof
The variance of (aX + b) is

var(aX + b) = E[((aX + b) - E[aX + b])²]

We know that E[aX + b] = a E[X] + b

var(aX + b) = E[(aX + b - a E[X] - b)²]

= E[(aX - a E[X])²]

= a² E[(X - X̄)²]

= a² var(X)
(5) If two random variables X1 and X2 are independent, then

var(X1 + X2) = var(X1) + var(X2)

and var(X1 - X2) = var(X1) + var(X2) (3.30)

Proof

var(X1 + X2) = E[((X1 + X2) - (X̄1 + X̄2))²]

We know that E[X1 + X2] = X̄1 + X̄2

So var(X1 + X2) = E[(X1 + X2 - X̄1 - X̄2)²]

= E[((X1 - X̄1) + (X2 - X̄2))²]

= E[(X1 - X̄1)²] + E[(X2 - X̄2)²] + 2E[(X1 - X̄1)(X2 - X̄2)]

Since X1 and X2 are independent,

E[(X1 - X̄1)(X2 - X̄2)] = E[X1 - X̄1] E[X2 - X̄2] = (X̄1 - X̄1)(X̄2 - X̄2) = 0

Therefore, var(X1 + X2) = var(X1) + var(X2)

Similarly, var(X1 - X2) = var(X1) + var(-X2)

= var(X1) + (-1)² var(X2)

var(X1 - X2) = var(X1) + var(X2)
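The variance properties above are easy to sanity-check numerically. A small NumPy sketch (the sample sizes and distributions are arbitrary choices, not from the text):

```python
# Numerical check of var(kX) = k^2 var(X), var(aX+b) = a^2 var(X), and
# additivity of variance for independent random variables.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(loc=2.0, scale=3.0, size=1_000_000)   # var(X1) = 9
x2 = rng.uniform(0.0, 6.0, size=1_000_000)            # var(X2) = 3

print(np.var(5 * x1), 25 * np.var(x1))            # var(kX) = k^2 var(X)
print(np.var(5 * x1 + 7), 25 * np.var(x1))        # var(aX+b) = a^2 var(X)
print(np.var(x1 + x2), np.var(x1) + np.var(x2))   # independent: sums add
print(np.var(x1 - x2), np.var(x1) + np.var(x2))   # differences add too
```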

3.5.4 Relationship between Central Moments and Moments about the Origin
Let X be a random variable. Then the nth central moment is given by

μn = E[(X - X̄)^n]

We know from the binomial theorem that

(X - X̄)^n = Σ (k = 0 to n) (-1)^k nCk X̄^k X^(n-k)

The expected value is

E[(X - X̄)^n] = Σ (k = 0 to n) (-1)^k nCk X̄^k E[X^(n-k)]

But we know that

X̄ = m1 = mean value

and E[X^(n-k)] = m(n-k) = (n-k)th moment

∴ μn = E[(X - X̄)^n] = Σ (k = 0 to n) (-1)^k nCk m1^k m(n-k) (3.31)

This expression is used to find the central moments of a random variable when the moments about the origin are known. For example, if n = 2, the variance is

σX² = μ2 = Σ (k = 0 to 2) (-1)^k 2Ck m1^k m(2-k)

= (-1)^0 2C0 m1^0 m2 + (-1)^1 2C1 m1 m1 + (-1)^2 2C2 m1² m0

= m2 - 2m1² + m1² m0

But m0 = 1

∴ σX² = μ2 = m2 - m1²

If n = 3,

μ3 = Σ (k = 0 to 3) (-1)^k 3Ck m1^k m(3-k)

= (-1)^0 3C0 m1^0 m3 + (-1)^1 3C1 m1 m2 + (-1)^2 3C2 m1² m1 + (-1)^3 3C3 m1³ m0

We know that m0 = 1

μ3 = m3 - 3m1 m2 + 3m1³ - m1³

= m3 - 3m1 (m2 - m1²) - m1³

In terms of σX², we know that

σX² = m2 - m1²

So μ3 = m3 - 3m1 σX² - m1³

= E[X³] - 3X̄ σX² - X̄³

(Or) μ3 can also be obtained directly from

μ3 = E[(X - X̄)³]

= E[X³ - X̄³ - 3X²X̄ + 3XX̄²]

= E[X³] - X̄³ - 3X̄ E[X²] + 3X̄² E[X]

= E[X³] - X̄³ - 3X̄ E[X²] + 3X̄³

= E[X³] - 3X̄ (E[X²] - X̄²) - X̄³

= E[X³] - 3X̄ σX² - X̄³

This expression gives the skew of the density function when the moments are known.
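Equation (3.31) can be checked numerically. The sketch below (an illustration with an arbitrary test distribution, not from the book) computes μ2 and μ3 from the raw moments and compares them with direct central-moment estimates:

```python
# Central moments from moments about the origin via eq. (3.31).
import numpy as np
from math import comb

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)   # any sample works here

m = [np.mean(x**n) for n in range(4)]            # raw moments m0..m3

def central_moment(n):
    """mu_n = sum_k (-1)^k C(n,k) m1^k m_{n-k}, eq. (3.31)."""
    return sum((-1)**k * comb(n, k) * m[1]**k * m[n - k]
               for k in range(n + 1))

print(central_moment(2), np.var(x))               # mu2 = variance
print(central_moment(3), np.mean((x - m[1])**3))  # mu3 = third central moment
```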
3.6 Functions for Moments
To calculate the nth moments of a random variable X, two functions are generally used: (1) characteristic function and (2) moment generating function.

3.6.1 Characteristic Function

Consider a random variable X with a probability density function f_X(x). Then the expected value of the function e^(jωX) is called the characteristic function. It is expressed as

Φ_X(ω) = E[e^(jωX)] (3.33)

It is a function of the real variable ω, -∞ < ω < ∞, where j is the imaginary operator.

Φ_X(ω) = ∫ e^(jωx) f_X(x) dx (3.34)

and for a discrete random variable

Φ_X(ω) = Σ e^(jωx) p(x) (3.35)

The characteristic function transforms the random variable X into another real variable ω. It can be expressed as a Fourier transform of f_X(x) with the sign of ω reversed. Therefore, the inverse Fourier transform of Φ_X(ω) gives the probability density function with the sign of x reversed. Hence

f_X(x) = (1/2π) ∫ Φ_X(ω) e^(-jωx) dω (3.36)

So the functions Φ_X(ω) and f_X(x) are Fourier transform pairs with the sign of the variable reversed.

Theorem
If Φ_X(ω) is a characteristic function of a random variable X, then the nth moment of X is given by

mn = (-j)^n d^n Φ_X(ω)/dω^n at ω = 0 (3.37)

Proof
Consider a random variable X with the characteristic function

Φ_X(ω) = E[e^(jωX)]

We know that the series expansion of e^(jωX) is

e^(jωX) = 1 + jωX + (jωX)²/2! + (jωX)³/3! + ... + (jωX)^n/n! + ...

= 1 + jωX - ω²X²/2! - jω³X³/3! + ...

Then

Φ_X(ω) = E[e^(jωX)] = E[1 + jωX - ω²X²/2! - jω³X³/3! + ...]

= 1 + jω E[X] - (ω²/2!) E[X²] - j(ω³/3!) E[X³] + ...

Φ_X(ω) = m0 + jω m1 - (ω²/2!) m2 - j(ω³/3!) m3 + ...

To find the moments, substitute ω = 0.

So, Φ_X(0) = m0 = 1

Differentiation of Φ_X(ω) with respect to ω gives

dΦ_X(ω)/dω = j m1 - ω m2 - j(3ω²/3!) m3 + ...

At ω = 0,

dΦ_X(ω)/dω at ω = 0 is j m1

∴ the first order moment is m1 = (-j) dΦ_X(ω)/dω at ω = 0

Second differentiation of Φ_X(ω) with respect to ω is

d²Φ_X(ω)/dω² = -m2 - jω m3 + ...

At ω = 0,

d²Φ_X(ω)/dω² at ω = 0 is -m2 = j² m2

∴ the second order moment is m2 = (-j)² d²Φ_X(ω)/dω² at ω = 0

Similarly, the nth differentiation of Φ_X(ω) with respect to ω at ω = 0 gives

d^n Φ_X(ω)/dω^n at ω = 0 is j^n mn

∴ mn = (-j)^n d^n Φ_X(ω)/dω^n at ω = 0.   Proved.

Thus the nth moments of a random variable X can be obtained from the nth differentiation of the characteristic function at ω = 0.
3.6.2 Properties of the Characteristic Function

(1) The characteristic function is unity at ω = 0, and given by

Φ_X(ω = 0) = Φ_X(0) = 1 (3.38)

Proof
The characteristic function is given by

Φ_X(ω) = E[e^(jωX)]

At ω = 0, Φ_X(0) = E[e^(j0X)] = E[1] = 1

(2) The maximum amplitude of the characteristic function is unity at ω = 0,

i.e., |Φ_X(ω)| ≤ Φ_X(0)

or |Φ_X(ω)| ≤ 1 (3.39)

Proof
Let the characteristic function be

Φ_X(ω) = E[e^(jωX)]

The amplitude of Φ_X(ω) is

|Φ_X(ω)| = |E[e^(jωX)]| = |∫ e^(jωx) f_X(x) dx|

Since the magnitude of an integral is at most the integral of the magnitude,

|Φ_X(ω)| ≤ ∫ |e^(jωx)| f_X(x) dx

Since |e^(jωx)| = 1,

|Φ_X(ω)| ≤ ∫ f_X(x) dx = 1

(3) Φ_X(ω) is a continuous function of ω in the range -∞ < ω < ∞. (3.40)

Proof
We know that

Φ_X(ω) = ∫ e^(jωx) f_X(x) dx

Since ω is continuous, e^(jωx) is also continuous. Hence Φ_X(ω) is a continuous function.

(4) Φ_X(-ω) and Φ_X(ω) are conjugate functions.

That is, Φ_X(-ω) = Φ_X*(ω)

and Φ_X*(-ω) = Φ_X(ω) (3.41)

Proof
We know that

Φ_X(ω) = E[e^(jωX)]

Φ_X(-ω) = E[e^(-jωX)] = E[(e^(jωX))*]

Φ_X(-ω) = Φ_X*(ω)

Similarly,

Φ_X*(-ω) = E[(e^(-jωX))*] = E[e^(jωX)]

Φ_X*(-ω) = Φ_X(ω)

(5) If Φ_X(ω) is a characteristic function of a random variable X, then the characteristic function of Y = aX + b is given by

Φ_Y(ω) = e^(jωb) Φ_X(aω), where a and b are real constants. (3.42)

Proof
Given Y = aX + b

Then Φ_Y(ω) = E[e^(jω(aX+b))]

= E[e^(jωaX) e^(jωb)] = e^(jωb) E[e^(j(ωa)X)]

Φ_Y(ω) = e^(jωb) Φ_X(ωa)

(6) If Φ_X(ω) is a characteristic function of a random variable X, then Φ_X(cω) = Φ_cX(ω), where c is a real constant. (3.43)

Proof
We know that

Φ_X(ω) = E[e^(jωX)]

then Φ_X(cω) = E[e^(jcωX)] = E[e^(jω(cX))]

Φ_X(cω) = Φ_cX(ω)

(7) If X1 and X2 are two independent random variables, then

Φ_(X1+X2)(ω) = Φ_X1(ω) Φ_X2(ω) (3.44)

Proof

Given Φ_X(ω) = E[e^(jωX)]

Then Φ_(X1+X2)(ω) = E[e^(jω(X1+X2))] = E[e^(jωX1) e^(jωX2)]

Since X1 and X2 are independent,

= E[e^(jωX1)] E[e^(jωX2)]

Φ_(X1+X2)(ω) = Φ_X1(ω) Φ_X2(ω)
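Property (7) can be spot-checked with an empirical characteristic function. A sketch (the distributions and the test frequency ω = 0.7 are arbitrary choices, not from the text):

```python
# For independent X1, X2 the characteristic function of the sum factors.
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=500_000)
x2 = rng.exponential(size=500_000)

def phi(samples, w):
    """Empirical characteristic function E[exp(j*w*X)]."""
    return np.mean(np.exp(1j * w * samples))

w = 0.7                          # any test frequency
print(phi(x1 + x2, w))           # phi_{X1+X2}(w)
print(phi(x1, w) * phi(x2, w))   # phi_{X1}(w) * phi_{X2}(w), nearly equal
```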
Example 3.5
Find the characteristic function of a uniformly distributed random variable X in the range [0, 1] and hence find m1.

Solution
For a uniformly distributed random variable, the density function is

f_X(x) = 1/(b - a), a ≤ x ≤ b
       = 0, otherwise

In the range [0, 1], a = 0, b = 1

f_X(x) = 1, 0 ≤ x ≤ 1
       = 0, otherwise

The characteristic function is

Φ_X(ω) = ∫ e^(jωx) f_X(x) dx = ∫_0^1 e^(jωx) dx

= [e^(jωx)/jω] from 0 to 1

Φ_X(ω) = (e^(jω) - 1)/jω

To find m1, we know that

m1 = (-j) dΦ_X(ω)/dω at ω = 0

Expanding e^(jω) as a series,

Φ_X(ω) = 1 + jω/2! + (jω)²/3! + ...

dΦ_X(ω)/dω = j/2! + 2j²ω/3! + ...

At ω = 0,

m1 = (-j)(j/2) = 1/2
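The result of Example 3.5 can be confirmed symbolically. A sketch using SymPy, starting from the characteristic function derived above:

```python
# Symbolic check of Example 3.5: CF of U[0, 1] and its first moment.
import sympy as sp

w = sp.symbols('w', real=True)

phi = (sp.exp(sp.I * w) - 1) / (sp.I * w)         # CF derived above
m1 = (-sp.I) * sp.limit(sp.diff(phi, w), w, 0)    # m1 = (-j) phi'(0)

print(m1)   # 1/2
```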
3.6.3 Moment Generating Function
The moment generating function (MGF) of a random variable is also used to generate the moments about the origin.
Consider a random variable X with a probability density function f_X(x). Then the moment generating function of X is defined as the expected value of the function e^(vX). It can be expressed as

M_X(v) = E[e^(vX)] (3.45)

where v is a real variable, -∞ < v < ∞.

Therefore M_X(v) = ∫ e^(vx) f_X(x) dx (3.46)

and for a discrete random variable X

M_X(v) = Σ e^(vx) p(x) (3.47)

The moments of X can be derived from the moment generating function. The main disadvantage of the moment generating function is that it may not exist for all random variables and all values of v. But the characteristic function exists for all values of ω.

Theorem

If M_X(v) is a moment generating function of a random variable X, then the nth moment of X is given by

mn = d^n M_X(v)/dv^n at v = 0 (3.48)

Proof
Consider a random variable X with moment generating function

M_X(v) = E[e^(vX)]

We know that the series expansion of e^(vX) is

e^(vX) = 1 + vX + v²X²/2! + v³X³/3! + ... + v^n X^n/n! + ...

M_X(v) = E[e^(vX)]

= E[1 + vX + v²X²/2! + ... + v^n X^n/n! + ...]

= 1 + v E[X] + (v²/2!) E[X²] + ... + (v^n/n!) E[X^n] + ...

M_X(v) = m0 + v m1 + (v²/2!) m2 + ... + (v^n/n!) mn + ...

To find the moments, substitute v = 0.

So M_X(0) = m0 = 1

Differentiating M_X(v) with respect to v,

dM_X(v)/dv = m1 + v m2 + (v²/2!) m3 + ...

At v = 0, m1 = dM_X(v)/dv at v = 0

Second differentiation of M_X(v) with respect to v is

d²M_X(v)/dv² = m2 + v m3 + ...

At v = 0, m2 = d²M_X(v)/dv² at v = 0

Similarly, for the nth differentiation of M_X(v) with respect to v, we get

d^n M_X(v)/dv^n at v = 0 is mn

The nth moment of X is given by

mn = d^n M_X(v)/dv^n at v = 0.   Hence proved.

Thus the nth moments of X can be obtained from the nth differentiation of the MGF at v = 0.

3.6.4 Properties of the Moment Generating Function

(1) The moment generating function at v = 0 is unity. It is given as

M_X(v = 0) = M_X(0) = 1 (3.49)

Proof
The moment generating function is given by

M_X(v) = E[e^(vX)]

At v = 0,

M_X(0) = E[e^0] = E[1] = 1

∴ M_X(0) = 1

(2) Let X be a random variable with moment generating function M_X(v). Then the moment generating function for Y = aX + b is given by

M_Y(v) = e^(bv) M_X(av) (3.50)

Proof
We know that

M_X(v) = E[e^(vX)]

Then M_Y(v) = E[e^(vY)] = E[e^(v(aX+b))]

= E[e^(avX) e^(bv)]

= e^(bv) E[e^((av)X)]

M_Y(v) = e^(bv) M_X(av)

(3) If M_X(v) is a moment generating function of a random variable X, then

M_X(cv) = M_cX(v), (3.51)

where c is a real constant.

Proof
We know that

M_X(v) = E[e^(vX)]

Then M_X(cv) = E[e^(cvX)] = E[e^(v(cX))]

M_X(cv) = M_cX(v).   Proved.

(4) If X1 and X2 are two independent random variables with moment generating functions M_X1(v) and M_X2(v), then

M_(X1+X2)(v) = M_X1(v) M_X2(v) (3.52)

Proof
We know that

M_X(v) = E[e^(vX)]

M_(X1+X2)(v) = E[e^(v(X1+X2))]

= E[e^(vX1) e^(vX2)]

Since X1 and X2 are independent random variables,

E[e^(vX1) e^(vX2)] = E[e^(vX1)] E[e^(vX2)]

M_(X1+X2)(v) = M_X1(v) M_X2(v).   Proved.

Similarly, if there are n independent random variables X1, X2, X3, ..., Xn with moment generating functions M_X1(v), M_X2(v), ..., M_Xn(v) respectively, then

M_(X1+X2+...+Xn)(v) = M_X1(v) M_X2(v) ... M_Xn(v) (3.53)

Example 3.6
The pdf of a random variable is given by

f_X(x) = e^(-x) for x ≥ 0

Find M_X(v), m1 and m2.

Solution
We know that

M_X(v) = E[e^(vX)] = ∫ e^(vx) f_X(x) dx

= ∫_0^∞ e^(vx) e^(-x) dx

= ∫_0^∞ e^(-(1-v)x) dx = [-e^(-(1-v)x)/(1 - v)] from 0 to ∞

M_X(v) = 1/(1 - v), for v < 1

Now m1 = dM_X(v)/dv at v = 0 is 1/(1 - v)² at v = 0, so m1 = 1

and m2 = d²M_X(v)/dv² at v = 0 is 2/(1 - v)³ at v = 0, so m2 = 2
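Example 3.6 can likewise be verified with SymPy (a sketch; `conds='none'` simply suppresses the convergence condition v < 1 that the integral requires):

```python
# SymPy check of Example 3.6: MGF of the unit exponential and its moments.
import sympy as sp

v, x = sp.symbols('v x', real=True)

M = sp.integrate(sp.exp(v * x) * sp.exp(-x), (x, 0, sp.oo),
                 conds='none')        # valid for v < 1
m1 = sp.diff(M, v).subs(v, 0)         # first moment  -> 1
m2 = sp.diff(M, v, 2).subs(v, 0)      # second moment -> 2

print(sp.simplify(M), m1, m2)         # 1/(1 - v), 1, 2
```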
3.7 Inequalities
There are three important inequalities which are very useful in solving some types of probability problems:
(1) Chebyshev's inequality
(2) Markov's inequality
(3) Chernoff's inequality and bound
3.7.1 Chebyshev's Inequality
Let X be a random variable with mean μ and variance σX². Then Chebyshev's inequality states that

P{|X - μ| ≥ kσ} ≤ 1/k²

Proof
Consider X is a random variable with mean μ and variance σX². Then

σX² = E[(X - μ)²] = ∫ (x - μ)² f_X(x) dx

From the density property, f_X(x) ≥ 0, so dropping the region μ - kσ < x < μ + kσ can only reduce the integral:

σX² ≥ ∫ from -∞ to μ-kσ of (x - μ)² f_X(x) dx + ∫ from μ+kσ to ∞ of (x - μ)² f_X(x) dx ... (1)

Consider x ≤ μ - kσ:

x - μ ≤ -kσ

(x - μ)² ≥ k²σ² ... (A)

Consider x ≥ μ + kσ:

x - μ ≥ kσ

(x - μ)² ≥ k²σ² ... (B)

Substituting (A) and (B) in equation (1),

σX² ≥ k²σ² ∫ from -∞ to μ-kσ of f_X(x) dx + k²σ² ∫ from μ+kσ to ∞ of f_X(x) dx

= k²σ² [P{X ≤ μ - kσ} + P{X ≥ μ + kσ}]

= k²σ² P{|X - μ| ≥ kσ}

∴ P{|X - μ| ≥ kσ} ≤ σX²/(k²σ²) = 1/k²

Hence proved.
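A quick empirical check of the bound (the exponential distribution and the values of k are arbitrary choices, not from the text):

```python
# Chebyshev: P{|X - mu| >= k*sigma} <= 1/k^2 for any finite-variance X.
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=1_000_000)   # mu = 1, sigma = 1
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    p = np.mean(np.abs(x - mu) >= k * sigma)
    print(k, p, 1 / k**2)    # observed probability vs the bound
```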
3.7.2 Markov's Inequality
Consider a continuous random variable X with pdf f_X(x). If f_X(x) = 0 for x < 0, then Markov's inequality states that

P{X ≥ a} ≤ X̄/a for a > 0 (3.59)

Proof
We know that

P(X ≥ a) = ∫ from a to ∞ of f_X(x) dx (3.60)

Also, since f_X(x) = 0 for x < 0,

X̄ = E[X] = ∫ from 0 to ∞ of x f_X(x) dx (3.61)

Restricting the integral to x ≥ a, and using x ≥ a there,

X̄ ≥ ∫ from a to ∞ of x f_X(x) dx ≥ a ∫ from a to ∞ of f_X(x) dx

From (3.60), X̄ ≥ a P(X ≥ a)

or P(X ≥ a) ≤ X̄/a.   Proved.

Note
If a = X̄, then P(X ≥ X̄) ≤ 1. (3.62)
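Markov's inequality can be checked the same way (again with an arbitrary nonnegative distribution, not from the text):

```python
# Markov: P{X >= a} <= E[X]/a for a nonnegative random variable.
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=1_000_000)   # X >= 0, E[X] = 2

for a in (2.0, 4.0, 8.0):
    print(a, np.mean(x >= a), x.mean() / a)      # observed vs Markov bound
```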
3.8 Transformations of a Random Variable
Transformation is used to convert a given random variable X into another random variable Y. It is denoted as

Y = T(X)

where T represents the transformation. It may be linear, non-linear, staircase or segmented. The transformation of X to Y is shown in Fig. 3.3.

Fig. 3.3 Transformation of X to Y

Here we need to consider three cases:
(1) Both X and T are continuous and T is either monotonically increasing or decreasing with X.
(2) Both X and T are continuous and T is non-monotonic.
(3) X is discrete and T is continuous.

3.8.1 Monotonic Transformation of a Continuous Random Variable

Consider a random variable X. If the transformation satisfies T(x1) < T(x2) for any x1 < x2, then it is called a monotonically increasing transformation. For the transformation to be monotonically decreasing, the condition is T(x1) > T(x2) for any x1 < x2.

Monotonically increasing function:

Assume that the transformation T is continuous and differentiable for all values of x with f_X(x) ≠ 0. Let another random variable Y have a value y0 corresponding to x0 of X, as shown in Fig. 3.4(a).
The transformation is given as

Y = T(X) (3.67)

y0 = T(x0) or x0 = T⁻¹(y0) (3.68)

where T⁻¹ represents the inverse of the transformation T.
Since the transformation provides a one-to-one correspondence between X and Y, the probability of the event {Y ≤ y0} must be equal to the probability of the event {X ≤ x0}. Thus

P{Y ≤ y0} = P{X ≤ x0}

F_Y(y0) = F_X(x0) (3.69)

or ∫ from -∞ to y0 of f_Y(y) dy = ∫ from -∞ to T⁻¹(y0) of f_X(x) dx (3.70)

Using Leibniz's rule, and differentiating both sides with respect to y0,

f_Y(y0) = f_X(T⁻¹(y0)) dT⁻¹(y0)/dy0

For any y,

f_Y(y) = f_X(T⁻¹(y)) dT⁻¹(y)/dy (3.71)

or f_Y(y) = f_X(x) dx/dy
Monotonically decreasing function:
Similarly, for a monotonically decreasing function as shown in Fig. 3.4(b), we know that

Fig. 3.4 (a) Monotonically increasing function. (b) Monotonically decreasing function

P{Y ≤ y0} = P{X ≥ x0} = 1 - P{X ≤ x0}

F_Y(y0) = 1 - F_X(x0)

or ∫ from -∞ to y0 of f_Y(y) dy = 1 - ∫ from -∞ to T⁻¹(y0) of f_X(x) dx (3.72)

Using Leibniz's rule and differentiating with respect to y0,

f_Y(y0) = -f_X(T⁻¹(y0)) dT⁻¹(y0)/dy0 (3.73)

or for any y,

f_Y(y) = -f_X(T⁻¹(y)) dT⁻¹(y)/dy (3.74)

For a monotonic transformation, either increasing or decreasing, the density function of Y is

f_Y(y) = f_X(T⁻¹(y)) |dT⁻¹(y)/dy| = f_X(x) |dx/dy| (3.75)
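Equation (3.75) can be illustrated with a simple monotonic transformation. A sketch (the choice Y = 2X + 3 with an exponential X is mine, not the book's):

```python
# Monotonic transformation Y = 2X + 3 of X ~ Exp(1), checked by histogram.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=1_000_000)
y = 2 * x + 3                         # monotonically increasing T

# f_Y(y) = f_X(T^{-1}(y)) * |dT^{-1}/dy|, with T^{-1}(y) = (y - 3)/2
def f_y(y_vals):
    x_inv = (y_vals - 3) / 2
    return np.exp(-x_inv) * 0.5       # f_X(x) = e^{-x}, |dx/dy| = 1/2

hist, edges = np.histogram(y, bins=50, range=(3, 10), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - f_y(centers))))   # small -> formula matches
```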
3.8.2 Non-Monotonic Transformation of a Continuous Random Variable
Consider that a random variable Y is a non-monotonic transformation of a random variable X, as shown in Fig. 3.5.

Fig. 3.5 Non-monotonic function

For a given event {Y ≤ y0}, there is more than one value of X. From Fig. 3.5, it is found that the event {Y ≤ y0} corresponds to the events {X ≤ x1 and x2 < X ≤ x3}.
Thus the probability of the event {Y ≤ y0} is equal to the probability of the event {x | Y ≤ y0}:

P{Y ≤ y0} = P{x | Y ≤ y0} (3.76)

or F_Y(y0) = ∫ over {x | Y ≤ y0} of f_X(x) dx (3.77)

By differentiating, the density function is given by

f_Y(y0) = dF_Y(y0)/dy0 = d/dy0 ∫ over {x | Y ≤ y0} of f_X(x) dx (3.78)

This can be simplified by using Leibniz's rule as

f_Y(y) = Σn f_X(xn) / |dT(x)/dx at x = xn| (3.79)

where xn, n = 1, 2, ... are the real roots of the equation y = T(x).
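Equation (3.79) can be illustrated with the classic non-monotonic example Y = X² (my choice of example, not the book's), where each y > 0 has the two roots ±√y and |dT/dx| = 2|x|:

```python
# Non-monotonic transformation Y = X^2 of a standard normal X.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=1_000_000)
y = x**2

def f_y(y_vals):
    r = np.sqrt(y_vals)                           # positive root of y = x^2
    f_x = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    return (f_x(r) + f_x(-r)) / (2 * r)           # sum over both roots

hist, edges = np.histogram(y, bins=50, range=(0.1, 5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - f_y(centers))))        # small -> formula matches
```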
