MGF Mit
1 MOMENT GENERATING FUNCTIONS

1.1 Definition

The moment generating function, or transform, associated with a random variable $X$ is defined by
\[
M_X(s) = E[e^{sX}].
\]
Note that this is essentially the same as the definition of the Laplace transform of a function $f_X$, except that we are using $s$ instead of $-s$ in the exponent.
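As a quick numerical sanity check of this definition, the expectation $E[e^{sX}]$ can be estimated by simulation and compared against a closed form. A minimal sketch, assuming $X$ is exponential with an illustrative rate $\lambda = 2$, whose transform $\lambda/(\lambda - s)$ is derived in the Examples section:

```python
import math
import random

random.seed(0)

lam = 2.0   # illustrative rate of an exponential random variable
s = 0.5     # transform argument; must satisfy s < lam for finiteness

# Monte Carlo estimate of M_X(s) = E[exp(s X)]
n = 200_000
mc_estimate = sum(math.exp(s * random.expovariate(lam)) for _ in range(n)) / n

closed_form = lam / (lam - s)  # known MGF of an exponential, see Examples
print(mc_estimate, closed_form)  # estimate should be close to 4/3
```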
Exercise 2. Suppose that
\[
\limsup_{x \to \infty} \frac{\log P(X > x)}{x} < 0.
\]
Show that there exists some $a > 0$ such that $M_X(s) < \infty$ for every $s \in [0, a)$.
There are explicit formulas that allow us to recover the PMF or PDF of a ran-
dom variable starting from the associated transform, but they are quite difficult
to use (e.g., involving “contour integrals”). In practice, transforms are usually
inverted by “pattern matching,” based on tables of known distribution-transform
pairs.
Assuming that the expectation and the differentiation can be interchanged, we have
\[
\frac{dM_X(s)}{ds}\bigg|_{s=0} = \frac{d}{ds}\, E[e^{sX}] \bigg|_{s=0} = E[X e^{sX}]\big|_{s=0} = E[X],
\]
and, more generally,
\[
\frac{d^m M_X(s)}{ds^m}\bigg|_{s=0} = \frac{d^m}{ds^m}\, E[e^{sX}] \bigg|_{s=0} = E[X^m e^{sX}]\big|_{s=0} = E[X^m].
\]
Thus, knowledge of the transform MX allows for an easy calculation of the
moments of a random variable X.
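This moment-generation property is easy to check numerically: differentiate a known transform at $s = 0$ with finite differences and compare against the known moments. A sketch, assuming $X$ is exponential with an illustrative rate $\lambda = 2$ (so $E[X] = 1/\lambda$ and $E[X^2] = 2/\lambda^2$):

```python
lam = 2.0  # illustrative rate parameter

def M(s):
    """MGF of an exponential random variable: lam/(lam - s), for s < lam."""
    return lam / (lam - s)

# Central-difference approximations to the first two derivatives at s = 0.
h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)           # ~ E[X]   = 1/lam   = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2   # ~ E[X^2] = 2/lam^2 = 0.5
print(m1, m2)
```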
Justifying the interchange of the expectation and the differentiation does
require some work. The steps are outlined in the following exercise. For sim-
plicity, we restrict to the case of nonnegative random variables.
Exercise 3. Suppose that $X$ is a nonnegative random variable and that $M_X(s) < \infty$ for all $s \in (-\infty, a]$, where $a$ is a positive number.
(a) Show that $E[X^k] < \infty$, for every $k$.
(b) Show that $E[X^k e^{sX}] < \infty$, for every $s < a$.
(c) Show that, for $h > 0$, $(e^{hX} - 1)/h \leq X e^{hX}$.
(d) Use the DCT to argue that
\[
E[X] = E\Big[\lim_{h \downarrow 0} \frac{e^{hX} - 1}{h}\Big] = \lim_{h \downarrow 0} \frac{E[e^{hX}] - 1}{h}.
\]
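The limit in part (d) can be seen concretely in a case where $E[e^{hX}]$ is available in closed form. A sketch assuming $X$ is exponential with an illustrative rate $\lambda = 2$, so that $E[e^{hX}] = \lambda/(\lambda - h)$ and the difference quotient equals $1/(\lambda - h) \to 1/\lambda = E[X]$:

```python
lam = 2.0  # illustrative rate; E[X] = 1/lam = 0.5

for h in (0.5, 0.1, 0.01, 0.001):
    quotient = (lam / (lam - h) - 1) / h  # (E[e^{hX}] - 1) / h
    print(h, quotient)                    # decreases toward 0.5 as h -> 0
```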
resulting in
\[
\frac{d^m}{ds^m}\, g_X(s) \bigg|_{s=0} = m!\, p_X(m).
\]
Furthermore,
\[
\frac{d}{ds}\, g_X(s) \bigg|_{s=1} = \sum_{m \geq 1} m\, p_X(m) = E[X].
\]
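Both identities can be checked numerically for a concrete PGF. A sketch assuming $X$ is geometric with an illustrative parameter $p = 0.25$, using the formula $g_X(s) = ps/(1-(1-p)s)$ from the Examples section; the Taylor coefficients $g_X^{(m)}(0)/m!$ are recovered here with a discrete Cauchy-integral approximation, which is more stable than repeated finite differencing:

```python
import cmath

p = 0.25  # illustrative success probability of a geometric random variable

def g(s):
    """PGF of Ge(p): g_X(s) = p*s / (1 - (1-p)*s), for |s| < 1/(1-p)."""
    return p * s / (1 - (1 - p) * s)

# p_X(m) = g^{(m)}(0)/m! is the m-th Taylor coefficient of g_X at 0.
# Approximate it via averaging g over a circle of radius r (Cauchy integral).
N, r = 64, 0.5
recovered = []
for m in (1, 2, 3):
    c = sum(g(r * cmath.exp(2j * cmath.pi * k / N))
            * cmath.exp(-2j * cmath.pi * k * m / N)
            for k in range(N)) / (N * r ** m)
    recovered.append(c.real)
print(recovered)  # close to [p, p*(1-p), p*(1-p)**2] = [0.25, 0.1875, 0.140625]

# The derivative at s = 1 recovers the mean: g'(1) = E[X] = 1/p.
h = 1e-6
mean = (g(1 + h) - g(1 - h)) / (2 * h)
print(mean)  # close to 4.0
```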
1.6 Examples
Example: $X \stackrel{d}{=} \mathrm{Exp}(\lambda)$. Then,
\[
M_X(s) = \int_0^\infty e^{sx}\, \lambda e^{-\lambda x}\, dx =
\begin{cases}
\dfrac{\lambda}{\lambda - s}, & s < \lambda; \\
\infty, & \text{otherwise.}
\end{cases}
\]
Example: $X \stackrel{d}{=} \mathrm{Ge}(p)$. Then,
\[
M_X(s) = \sum_{m=1}^{\infty} e^{sm}\, p (1-p)^{m-1} =
\begin{cases}
\dfrac{e^s p}{1 - (1-p)e^s}, & e^s < 1/(1-p); \\
\infty, & \text{otherwise.}
\end{cases}
\]
In this case, we also find $g_X(s) = ps/(1 - (1-p)s)$, for $s < 1/(1-p)$, and $g_X(s) = \infty$, otherwise.
Example: $X \stackrel{d}{=} N(0,1)$. Then,
\[
M_X(s) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp(sx) \exp\!\Big(-\frac{x^2}{2}\Big)\, dx
= \frac{\exp(s^2/2)}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\!\Big(-\frac{x^2 - 2sx + s^2}{2}\Big)\, dx
= \exp(s^2/2).
\]
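The closed form $\exp(s^2/2)$ can be checked against a Monte Carlo estimate of $E[e^{sX}]$ for standard normal samples; a small sketch with an arbitrary choice $s = 0.7$:

```python
import math
import random

random.seed(1)

s = 0.7  # arbitrary transform argument; any real s works for the normal
n = 200_000
mc = sum(math.exp(s * random.gauss(0.0, 1.0)) for _ in range(n)) / n
exact = math.exp(s ** 2 / 2)
print(mc, exact)  # the two values should be close
```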
Theorem 2.
For part (b), we have
For part (c), by conditioning on the random choice between X and Y , we have
implying that
\[
M_Y(s) = \sum_{n=1}^{\infty} \exp\big(n \log M_X(s)\big)\, P(N = n) = M_N\big(\log M_X(s)\big).
\]
The reader is encouraged to take the derivative of the above expression, and evaluate it at $s = 0$, to recover the formula $E[Y] = E[N]E[X]$.
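That derivative computation can also be done numerically. A sketch with illustrative choices $X \sim \mathrm{Exp}(\lambda)$, $\lambda = 2$, and $N \sim \mathrm{Ge}(p)$, $p = 0.25$, for which $E[N]E[X] = (1/p)(1/\lambda) = 2$:

```python
import math

lam, p = 2.0, 0.25   # illustrative parameters: X ~ Exp(lam), N ~ Ge(p)

def M_X(s):          # MGF of Exp(lam), valid for s < lam
    return lam / (lam - s)

def M_N(s):          # MGF of Ge(p), valid for e^s < 1/(1-p)
    return p * math.exp(s) / (1 - (1 - p) * math.exp(s))

def M_Y(s):          # transform of the random sum Y = X_1 + ... + X_N
    return M_N(math.log(M_X(s)))

# Central-difference derivative of M_Y at s = 0 gives E[Y].
h = 1e-6
mean_Y = (M_Y(h) - M_Y(-h)) / (2 * h)
print(mean_Y, (1 / p) * (1 / lam))  # both equal E[N]*E[X] = 2.0
```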
Example: Suppose that each $X_i$ is exponentially distributed, with parameter $\lambda$, and that $N$ is geometrically distributed, with parameter $p \in (0,1)$. We find that
\[
M_Y(s) = M_N\big(\log M_X(s)\big) = \frac{p\,\lambda/(\lambda - s)}{1 - (1-p)\,\lambda/(\lambda - s)} = \frac{p\lambda}{p\lambda - s}, \qquad s < p\lambda,
\]
so that $Y$ is itself exponentially distributed, with parameter $p\lambda$.
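The standard fact that a geometric sum of i.i.d. exponentials is again exponential, with parameter $p\lambda$, can be supported by simulation; a sketch with illustrative values $\lambda = 2$ and $p = 0.25$, comparing the sample mean of $Y$ to $1/(p\lambda)$:

```python
import random

random.seed(2)

lam, p = 2.0, 0.25   # illustrative parameters; Y should be Exp(p*lam) = Exp(0.5)

def sample_Y():
    n = 1
    while random.random() > p:   # N ~ Ge(p) on {1, 2, ...}
        n += 1
    return sum(random.expovariate(lam) for _ in range(n))

samples = [sample_Y() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean, 1 / (p * lam))  # sample mean should be close to 2.0
```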
If two random variables $X$ and $Y$ are described by some joint distribution (e.g., a joint PDF), then each one is associated with a transform $M_X(s)$ or $M_Y(s)$.
These are the transforms of the marginal distributions and do not convey infor-
mation on the dependence between the two random variables. Such information
is contained in a multivariate transform, which we now define.
Consider $n$ random variables $X_1, \ldots, X_n$ related to the same experiment. Let $s_1, \ldots, s_n$ be real parameters. The associated multivariate transform is a function of these $n$ parameters and is defined by
\[
M_{X_1,\ldots,X_n}(s_1, \ldots, s_n) = E\big[\exp(s_1 X_1 + \cdots + s_n X_n)\big].
\]
Remarks:
MIT OpenCourseWare
https://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms