
Stat 150 Stochastic Processes

Spring 2009

Lecture 12: Probability Generating Functions


Lecturer: Jim Pitman

(See text, around page 185.) Previous lecture showed the value of identifying a sequence with a corresponding generating function defined by a power series, then recognizing that various operations on sequences correspond to familiar algebraic and analytic operations on the generating functions. For instance, if sequences $(f_n)$ and $(g_n)$ have generating functions

\[
F(z) := \sum_{n=0}^{\infty} f_n z^n \quad\text{and}\quad G(z) := \sum_{n=0}^{\infty} g_n z^n,
\]
then the convolution
\[
h_n = \sum_{m=0}^{n} f_m g_{n-m}
\]
has generating function
\[
H(z) := \sum_{n=0}^{\infty} h_n z^n = F(z)\,G(z).
\]
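As a quick numerical illustration (not part of the lecture), the following Python sketch checks the convolution rule on two arbitrary short coefficient sequences: the convolution of the sequences coincides with the coefficient sequence of the product of the corresponding polynomials.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Two arbitrary coefficient sequences (f_n) and (g_n), chosen only for illustration.
f = np.array([0.5, 0.3, 0.2])        # f_0, f_1, f_2
g = np.array([0.1, 0.4, 0.3, 0.2])   # g_0, g_1, g_2, g_3

# h_n = sum_{m=0}^{n} f_m g_{n-m}: the convolution of the two sequences.
h = np.convolve(f, g)

# Coefficients of the product F(z) G(z), computed by multiplying the polynomials.
FG = Polynomial(f) * Polynomial(g)

assert np.allclose(h, FG.coef)       # convolution = coefficient sequence of F G
print(h)
```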

Consider a discrete random variable $X$ with values $0, 1, 2, 3, \ldots$ and probabilities $P(X = n) = p_n$, $n = 0, 1, 2, \ldots$. The probability GF (of $X$, or of $p_0, p_1, \ldots$) is

\[
\phi(s) := \phi_X(s) := \sum_{n=0}^{\infty} p_n s^n, \qquad |s| \le 1.
\]

The generic notation is $\phi(s)$. The subscript $X$ is used only to indicate which random variable $X$ the generating function is derived from.


Basic Properties:
\[
\phi(1) = 1 \quad [\text{assuming } P(0 \le X < \infty) = 1], \qquad \phi(0) = p_0 = P(X = 0),
\]
\[
\phi'(s) = \sum_{n=0}^{\infty} n p_n s^{n-1}, \quad |s| < 1, \qquad \phi'(0) = p_1,
\]
\[
\phi'(1) = \sum_{n=0}^{\infty} n p_n = E X,
\]
\[
\phi''(s) = \sum_{n=0}^{\infty} n(n-1) p_n s^{n-2}, \quad |s| < 1, \qquad \phi''(0) = 2 p_2,
\]
\[
\phi''(1) = \sum_{n=0}^{\infty} n(n-1) p_n = E[X(X-1)],
\]
and so on for higher derivatives: evaluating the $k$th derivative at $0$ gives $k!\,p_k$, and evaluating it at $1$ gives $E[X(X-1)\cdots(X-k+1)]$. In case any of these moments are infinite, so is the derivative, evaluated as a limit as $s \uparrow 1$. In particular, the first two derivatives of $\phi$ at $1$ give the mean and variance:
\[
E(X) = \phi'(1), \qquad E(X^2) = \phi''(1) + \phi'(1),
\]
\[
\mathrm{Var}(X) = E(X^2) - [E(X)]^2 = \phi''(1) + \phi'(1) - (\phi'(1))^2.
\]
Uniqueness: For $X$ with values in $\{0, 1, 2, \ldots\}$, the function $s \mapsto \phi_X(s)$ for $|s| \le 1$, or even for $|s| < \epsilon$ for any $\epsilon > 0$, determines the distribution of $X$ uniquely. Proof: $P(X = n) = \phi_X^{(n)}(0)/n!$.
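A minimal sympy sketch (an illustration, not from the notes) checking these properties and the uniqueness formula for an arbitrary, made-up pmf on $\{0,1,2,3\}$:

```python
import sympy as sp

s = sp.symbols('s')
# An arbitrary pmf on {0, 1, 2, 3}, chosen only for the illustration.
p = [sp.Rational(1, 8), sp.Rational(3, 8), sp.Rational(3, 8), sp.Rational(1, 8)]

phi = sum(p[n] * s**n for n in range(4))                 # phi(s) = sum_n p_n s^n

assert phi.subs(s, 1) == 1                               # phi(1) = 1
assert phi.subs(s, 0) == p[0]                            # phi(0) = p_0

mean = sp.diff(phi, s).subs(s, 1)                        # E X = phi'(1)
var = sp.diff(phi, s, 2).subs(s, 1) + mean - mean**2     # Var X = phi''(1) + phi'(1) - phi'(1)^2
assert mean == sum(n * p[n] for n in range(4))
assert var == sum(n**2 * p[n] for n in range(4)) - mean**2

# Uniqueness: P(X = n) = phi^{(n)}(0) / n!
for n in range(4):
    assert sp.diff(phi, s, n).subs(s, 0) / sp.factorial(n) == p[n]

print(mean, var)
```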


Sums of independent random variables: Write $\phi_X(s)$ for the GF of $X$ and $\phi_Y(s)$ for the GF of $Y$. Assume $X$ and $Y$ are independent. Then
\[
\phi_{X+Y}(s) = \phi_X(s)\,\phi_Y(s),
\]
because
\[
\phi_X(s)\,\phi_Y(s) = \left( \sum_{k=0}^{\infty} P(X = k) s^k \right) \left( \sum_{m=0}^{\infty} P(Y = m) s^m \right)
= \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} P(X = k)\, P(Y = n-k) \right) s^n
= \sum_{n=0}^{\infty} P(X + Y = n)\, s^n.
\]
Alternative proof:
\[
\phi_X(s) = \sum_{n=0}^{\infty} P(X = n) s^n = E[s^X],
\]
\[
\phi_{X+Y}(s) = E[s^{X+Y}] = E[s^X s^Y] = E[s^X]\,E[s^Y],
\]
because independence of $X$ and $Y$ implies independence of $s^X$ and $s^Y$, and the expectation of a product of independent random variables is the product of their expectations.

Exercise: Use the probability GF to confirm that the sum of independent Poissons is Poisson.

1) Compute the GF of $\mathrm{Poi}(\lambda)$, where $p_n = e^{-\lambda} \lambda^n / n!$:

\[
\phi(s) = \sum_{n=0}^{\infty} e^{-\lambda} \frac{\lambda^n}{n!}\, s^n = e^{-\lambda} e^{\lambda s} = e^{\lambda(s-1)}.
\]

2) Look at the product of the GFs of $\mathrm{Poi}(\lambda)$ and $\mathrm{Poi}(\mu)$:
\[
e^{\lambda(s-1)}\, e^{\mu(s-1)} = e^{(\lambda+\mu)(s-1)},
\]
which is the GF of $\mathrm{Poi}(\lambda+\mu)$. That is, if $X \sim \mathrm{Poi}(\lambda)$, $Y \sim \mathrm{Poi}(\mu)$, and $X$ and $Y$ are independent, then $X + Y \sim \mathrm{Poi}(\lambda + \mu)$.
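A quick numerical check of this conclusion (illustration only, with arbitrary rates $\lambda = 2$ and $\mu = 3.5$): convolving the two Poisson pmfs, which is the coefficient statement behind multiplying their GFs, reproduces the $\mathrm{Poi}(\lambda+\mu)$ pmf.

```python
import numpy as np
from scipy.stats import poisson

lam, mu = 2.0, 3.5               # arbitrary rates for the illustration
n = np.arange(61)                # consider indices 0, ..., 60

px = poisson.pmf(n, lam)         # pmf of X ~ Poi(lam)
py = poisson.pmf(n, mu)          # pmf of Y ~ Poi(mu)

# Coefficients of phi_X(s) phi_Y(s): the convolution of the two pmfs.
pz = np.convolve(px, py)[:61]

# For indices up to 60 the convolution sum is complete, and it matches the Poi(lam + mu) pmf.
assert np.allclose(pz, poisson.pmf(n, lam + mu))
print(pz[:5])
```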


Random sums: Suppose $X_1, X_2, \ldots$ are i.i.d. on $\{0, 1, 2, \ldots\}$, and $N$ is a random index independent of $X_1, X_2, \ldots$. Problem: find the distribution of $S_N = X_1 + \cdots + X_N$. (Note: $S_0 = 0$ by convention.)

Solution: Apply GFs. Let $\phi_X(s)$ be the common GF of the $X_i$'s:
\[
\phi_X(s) = E[s^X] = \sum_{n=0}^{\infty} P(X = n) s^n, \qquad \text{where } X \text{ denotes } X_i \text{ for any } i.
\]

Compute by conditioning on $N$:
\[
\phi_{S_N}(s) = E[s^{S_N}] = \sum_{n=0}^{\infty} P(N = n)\, E[s^{S_N} \mid N = n]
= \sum_{n=0}^{\infty} P(N = n)\, E[s^{X_1 + \cdots + X_n}]
= \sum_{n=0}^{\infty} P(N = n)\, [\phi_X(s)]^n
= \phi_N(\phi_X(s)).
\]
From the GF of $S_N$, namely $\phi_N(\phi_X(s))$, we get formulas for means and variances. Compare with the text, first chapter.

Example: Poisson thinning. This is a Stat 134 exercise, much simplified by the use of GFs. Let $X_1, X_2, \ldots$ be independent 0/1 Bernoulli($p$) trials. Let $N$ be $\mathrm{Poi}(\lambda)$, independent of the $X_i$'s. Let $S_N = X_1 + \cdots + X_N$ be the number of successes in a $\mathrm{Poi}(\lambda)$ number of trials. Then $S_N \sim \mathrm{Poi}(\lambda p)$. This can be checked directly, most easily by showing also that $S_N$ and $N - S_N$ are independent, with $N - S_N \sim \mathrm{Poi}(\lambda(1-p))$. But the GF computation is very quick:
\[
\phi_{S_N}(s) = \phi_N(\phi_X(s)) = \phi_N(q + ps) = e^{\lambda((q+ps)-1)} = e^{\lambda p(s-1)}, \qquad q := 1 - p.
\]
This is the GF of $\mathrm{Poi}(\lambda p)$, hence the conclusion, by uniqueness of the GF.
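A simulation sketch of the thinning result (not part of the notes; the values $\lambda = 6$ and $p = 0.3$ are arbitrary): draw $N \sim \mathrm{Poi}(\lambda)$, count successes among $N$ Bernoulli($p$) trials, and compare the empirical distribution of $S_N$ with the $\mathrm{Poi}(\lambda p)$ pmf.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lam, p, reps = 6.0, 0.3, 200_000     # arbitrary parameters for the illustration

N = rng.poisson(lam, size=reps)      # N ~ Poi(lam): the random number of Bernoulli(p) trials
S = rng.binomial(N, p)               # S_N: number of successes among the N trials

ks = np.arange(15)
empirical = np.array([(S == k).mean() for k in ks])
theoretical = poisson.pmf(ks, lam * p)   # the claimed Poi(lam * p) distribution of S_N

print(np.column_stack([ks, empirical.round(4), theoretical.round(4)]))
```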

Compare: Moment GF. Usually the moment GF is defined for a real-valued $X$ as
\[
M_X(t) := E[e^{tX}] = \sum_{n=0}^{\infty} \frac{t^n E[X^n]}{n!},
\]
provided the series converges in some neighbourhood of $t = 0$. For discrete $X$ with values in $\{0, 1, 2, \ldots\}$, the change of variable $e^t = s$ shows that
\[
M_X(t) = \phi_X(e^t) \quad \text{and} \quad \phi_X(s) = M_X(\log s).
\]
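As a small numerical sanity check of this change of variable (not in the original notes; the pmf and the point $t$ are arbitrary):

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])                     # an arbitrary pmf on {0, 1, 2}
t = 0.7                                           # an arbitrary evaluation point

phi = lambda s: np.sum(p * s ** np.arange(3))     # phi_X(s) = sum_n p_n s^n
mgf = np.sum(p * np.exp(t * np.arange(3)))        # M_X(t) = E[exp(t X)]

assert np.isclose(mgf, phi(np.exp(t)))            # M_X(t) = phi_X(e^t)
print(mgf, phi(np.exp(t)))
```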


Application of GFs to recover a discrete distribution from its factorial moments. Consider a discrete distribution of $X$ on $\{0, 1, 2, \ldots, n\}$. Let us check that such a distribution is determined by its binomial moments
\[
E\binom{X}{k} = \frac{E[(X)_k]}{k!},
\]
where $(X)_k := X(X-1)\cdots(X-k+1)$ is a falling factorial function of $X$, and $E[(X)_k]$ is the $k$th factorial moment of $X$. Note that the $k$th binomial moment is just some linear combination of the first $k$ moments of $X$. Since $X$ takes values in $\{0, \ldots, n\}$, $\phi_X$ is a polynomial of degree at most $n$, so its Taylor expansion about $s = 1$ is exact and terminates at $k = n$. Now
\[
\phi_X(s) = E[s^X] = \phi_X(1 + (s-1))
= \sum_{k=0}^{n} \phi_X^{(k)}(1)\, \frac{(s-1)^k}{k!}
= \sum_{k=0}^{n} E\binom{X}{k} (s-1)^k.
\]
Hence, expanding $(s-1)^k$ by the binomial theorem and collecting coefficients, for $0 \le j \le n$,
\[
P(X = j) = \text{coefficient of } s^j \text{ in } \phi_X(s)
= \sum_{k=j}^{n} (-1)^{k-j} \binom{k}{j} E\binom{X}{k}.
\]
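A numerical check of this inversion formula (illustration only): take a Binomial$(6, 0.3)$ variable as an arbitrary example of a distribution on $\{0,\ldots,n\}$, compute its binomial moments directly from the pmf, and recover the pmf from the alternating sums.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import comb

n, p = 6, 0.3                           # an arbitrary distribution on {0, ..., n}: Binomial(6, 0.3)
xs = np.arange(n + 1)
pmf = binom.pmf(xs, n, p)

# Binomial moments E[ C(X, k) ], k = 0, ..., n, computed directly from the pmf.
bmom = np.array([np.sum(comb(xs, k) * pmf) for k in range(n + 1)])

# Recover P(X = j) = sum_{k=j}^{n} (-1)^{k-j} C(k, j) E[ C(X, k) ].
recovered = np.array([
    sum((-1) ** (k - j) * comb(k, j) * bmom[k] for k in range(j, n + 1))
    for j in range(n + 1)
])

assert np.allclose(recovered, pmf)
print(np.column_stack([xs, pmf.round(6), recovered.round(6)]))
```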

In particular, if $X$ is the number of events that occur in some sequence of events $A_1, \ldots, A_n$, then in terms of the indicators $X_i = 1_{A_i}$,
\[
X := \sum_{i=1}^{n} X_i,
\]
and then
\[
\binom{X}{k} = \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} \; \prod_{j=1}^{k} X_{i_j}
\]
is the number of ways to choose $k$ distinct events $A_i$ among those which happen to occur. So
\[
E\binom{X}{k} = \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k})
\]


is the usual sum appearing in the inclusion-exclusion formula for the probability of a union of $n$ events, which can be written in the present notation as
\[
P\Big( \bigcup_{i=1}^{n} A_i \Big) = P(X \ge 1) = \sum_{k=1}^{n} (-1)^{k-1} E\binom{X}{k}.
\]
Since $P(X \ge 1) = 1 - P(X = 0)$ and $\binom{X}{0} = 1$, this agrees with the previous formula for $P(X = j)$ in the case $j = 0$, which is
\[
P(X = 0) = \sum_{k=0}^{n} (-1)^{k} E\binom{X}{k} = 1 + \sum_{k=1}^{n} (-1)^{k} E\binom{X}{k}.
\]

Application to the matching problem. Let $M_n$ be the number of matches, that is, the number of indices $i$ with $i = \pi_n(i)$, where $\pi_n$ is a uniformly distributed random permutation of $\{1, \ldots, n\}$. Apply the previous discussion with $X = M_n$ and $A_i$ the event $\{i = \pi_n(i)\}$ to see that
\[
E\binom{M_n}{k} = \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k})
= \binom{n}{k} \frac{1}{(n)_k} = \frac{1}{k!},
\]
and hence
\[
P(M_n = j) = \sum_{k=j}^{n} (-1)^{k-j} \binom{k}{j} \frac{1}{k!} = \frac{1}{j!} \sum_{k=j}^{n} \frac{(-1)^{k-j}}{(k-j)!},
\]
which converges as $n \to \infty$ to the limit
\[
\frac{1}{j!} \sum_{k=j}^{\infty} \frac{(-1)^{k-j}}{(k-j)!} = \frac{e^{-1}}{j!} = P(M = j)
\]
for a random variable $M$ with Poisson(1) distribution. Thus the limit distribution of the number of matches in a large random permutation is Poisson(1).
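A simulation sketch (not part of the notes; the permutation size $n = 20$ and the number of samples are arbitrary): count fixed points of uniform random permutations and compare with the Poisson(1) pmf.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
n, reps = 20, 100_000                    # permutation size and sample count, chosen arbitrarily

idx = np.arange(n)
matches = np.array([np.sum(rng.permutation(n) == idx) for _ in range(reps)])

ks = np.arange(6)
empirical = np.array([(matches == k).mean() for k in ks])
print(np.column_stack([ks, empirical.round(4), poisson.pmf(ks, 1.0).round(4)]))
```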
