M5A42 Applied Stochastic Processes Problem Sheet 1 Solutions Term 1 2010-2011
1. Calculate the mean, the variance and the characteristic function of the following probability distributions:
(a) the exponential distribution with density $f(x) = \lambda e^{-\lambda x}$, $x \geq 0$, $\lambda > 0$;
(b) the uniform distribution on $[a, b]$ with density $f(x) = \frac{1}{b-a}$, $x \in [a, b]$;
(c) the Gamma distribution with density $f(x) = \frac{\lambda^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\lambda x}$, $x \geq 0$, $\alpha, \lambda > 0$.
SOLUTION
(a)
\[
E(X) = \int_{-\infty}^{+\infty} x f(x)\, dx = \int_0^{+\infty} \lambda x e^{-\lambda x}\, dx = \frac{1}{\lambda}.
\]
\[
E(X^2) = \int_{-\infty}^{+\infty} x^2 f(x)\, dx = \int_0^{+\infty} \lambda x^2 e^{-\lambda x}\, dx = \frac{2}{\lambda^2}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{1}{\lambda^2}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \int_0^{+\infty} e^{itx}\, \lambda e^{-\lambda x}\, dx = \frac{\lambda}{\lambda - it}.
\]
(b)
\[
E(X) = \int_{-\infty}^{+\infty} x f(x)\, dx = \int_a^b \frac{x}{b-a}\, dx = \frac{a+b}{2}.
\]
\[
E(X^2) = \int_{-\infty}^{+\infty} x^2 f(x)\, dx = \int_a^b \frac{x^2}{b-a}\, dx = \frac{b^2 + ab + a^2}{3}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{(b-a)^2}{12}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \int_a^b e^{itx}\, \frac{1}{b-a}\, dx = \frac{e^{itb} - e^{ita}}{it(b-a)}.
\]
(c)
\[
E(X) = \frac{\lambda^\alpha}{\Gamma(\alpha)} \int_0^{+\infty} x^{\alpha} e^{-\lambda x}\, dx = \frac{\Gamma(\alpha+1)}{\lambda\, \Gamma(\alpha)} = \frac{\alpha}{\lambda}.
\]
\[
E(X^2) = \frac{\lambda^\alpha}{\Gamma(\alpha)} \int_0^{+\infty} x^{1+\alpha} e^{-\lambda x}\, dx = \frac{\Gamma(\alpha+2)}{\lambda^2\, \Gamma(\alpha)} = \frac{\alpha(\alpha+1)}{\lambda^2}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{\alpha}{\lambda^2}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \frac{\lambda^\alpha}{\Gamma(\alpha)} \int_0^{+\infty} e^{itx} x^{\alpha - 1} e^{-\lambda x}\, dx = \frac{\lambda^\alpha}{\Gamma(\alpha)} \frac{1}{(\lambda - it)^\alpha} \int_0^{+\infty} e^{-y} y^{\alpha - 1}\, dy = \frac{\lambda^\alpha}{(\lambda - it)^\alpha}.
\]
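The moments computed in parts (a)-(c) can be cross-checked numerically. The sketch below is an illustrative verification only; the parameter values $\lambda = 2$, $[a, b] = [1, 4]$, $\alpha = 3$ are arbitrary test choices, not part of the problem.

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as Gamma

lam, a, b, alpha = 2.0, 1.0, 4.0, 3.0  # arbitrary test parameters

# (a) exponential density lambda * exp(-lambda x) on [0, inf)
m1, _ = integrate.quad(lambda x: x * lam * np.exp(-lam * x), 0, np.inf)
m2, _ = integrate.quad(lambda x: x**2 * lam * np.exp(-lam * x), 0, np.inf)
assert np.isclose(m1, 1 / lam) and np.isclose(m2 - m1**2, 1 / lam**2)

# (b) uniform density 1/(b-a) on [a, b]
u1, _ = integrate.quad(lambda x: x / (b - a), a, b)
u2, _ = integrate.quad(lambda x: x**2 / (b - a), a, b)
assert np.isclose(u1, (a + b) / 2)
assert np.isclose(u2 - u1**2, (b - a)**2 / 12)

# (c) Gamma density lambda^alpha / Gamma(alpha) * x^(alpha-1) * exp(-lambda x)
f = lambda x: lam**alpha / Gamma(alpha) * x**(alpha - 1) * np.exp(-lam * x)
g1, _ = integrate.quad(lambda x: x * f(x), 0, np.inf)
g2, _ = integrate.quad(lambda x: x**2 * f(x), 0, np.inf)
assert np.isclose(g1, alpha / lam) and np.isclose(g2 - g1**2, alpha / lam**2)
```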
2. (a) Let X be a continuous random variable with characteristic function $\phi(t)$. Show that
\[
EX^k = \frac{1}{i^k}\, \phi^{(k)}(0),
\]
where $\phi^{(k)}(t)$ denotes the $k$-th derivative of $\phi$ evaluated at $t$.
(b) Let X be a nonnegative random variable with distribution function $F(x)$. Show that
\[
E(X) = \int_0^{+\infty} (1 - F(x))\, dx.
\]
(c) Let X be a continuous random variable with probability density function $f(x)$ and characteristic function $\phi(t)$. Find the probability density and characteristic function of the random variable $Y = aX + b$ with $a, b \in \mathbb{R}$.
(d) Let X be a random variable with uniform distribution on $[0, 2\pi]$. Find the probability density of the random variable $Y = \sin(X)$.
SOLUTION
(a) We have
\[
\phi(t) = E(e^{itX}) = \int_{\mathbb{R}} e^{itx} f(x)\, dx.
\]
Consequently,
\[
\phi^{(k)}(t) = \int_{\mathbb{R}} (ix)^k e^{itx} f(x)\, dx.
\]
Thus,
\[
\phi^{(k)}(0) = \int_{\mathbb{R}} (ix)^k f(x)\, dx = i^k\, EX^k,
\]
and $EX^k = \frac{1}{i^k}\, \phi^{(k)}(0)$.
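The moment formula can be checked numerically by finite-differencing a known characteristic function at $t = 0$. A minimal sketch, assuming the exponential characteristic function $\phi(t) = \lambda/(\lambda - it)$ from question 1(a) with an arbitrary test rate $\lambda = 2$:

```python
import numpy as np

lam = 2.0                              # arbitrary test rate
phi = lambda t: lam / (lam - 1j * t)   # characteristic function from question 1(a)

h = 1e-4
# central finite differences for phi'(0) and phi''(0)
d1 = (phi(h) - phi(-h)) / (2 * h)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2

EX = (d1 / 1j).real        # EX   = phi'(0)  / i
EX2 = (d2 / 1j**2).real    # EX^2 = phi''(0) / i^2
assert np.isclose(EX, 1 / lam, atol=1e-6)
assert np.isclose(EX2, 2 / lam**2, atol=1e-4)
```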
(b) Let $R > 0$ and consider, integrating by parts,
\[
\int_0^R x f(x)\, dx = \int_0^R x \frac{dF}{dx}\, dx = xF(x)\Big|_0^R - \int_0^R F(x)\, dx = \int_0^R (F(R) - F(x))\, dx.
\]
Thus,
\[
EX = \lim_{R \to +\infty} \int_0^R x f(x)\, dx = \int_0^{+\infty} (1 - F(x))\, dx,
\]
where the fact $\lim_{x \to +\infty} F(x) = 1$ was used.
Alternatively, interchanging the order of integration,
\[
\int_0^{+\infty} (1 - F(x))\, dx = \int_0^{+\infty} \int_x^{+\infty} f(y)\, dy\, dx = \int_0^{+\infty} \int_0^y f(y)\, dx\, dy = \int_0^{+\infty} y f(y)\, dy = EX.
\]
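The tail formula is easy to confirm numerically on a concrete example. A sketch assuming $X \sim \mathrm{Exp}(\lambda)$ with an arbitrary test rate $\lambda = 1.5$, so that $EX = 1/\lambda$:

```python
import numpy as np
from scipy import integrate

lam = 1.5                            # arbitrary test rate; X ~ Exp(lam), EX = 1/lam
F = lambda x: 1 - np.exp(-lam * x)   # distribution function of X

# E(X) = integral over [0, inf) of (1 - F(x))
tail, _ = integrate.quad(lambda x: 1 - F(x), 0, np.inf)
assert np.isclose(tail, 1 / lam)
```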
(c) Assume for concreteness that $a > 0$. We have
\[
P(Y \leq y) = P(aX + b \leq y) = P\Big( X \leq \frac{y-b}{a} \Big) = \int_{-\infty}^{\frac{y-b}{a}} f(x)\, dx.
\]
Consequently,
\[
f_Y(y) = \frac{\partial}{\partial y} P(Y \leq y) = \frac{1}{a} f\Big( \frac{y-b}{a} \Big).
\]
(For $a < 0$ the inequality reverses and one obtains $f_Y(y) = \frac{1}{|a|} f\big(\frac{y-b}{a}\big)$.) Similarly,
\[
\phi_Y(t) = E e^{itY} = E e^{it(aX+b)} = e^{itb}\, E e^{itaX} = e^{itb}\, \phi(at).
\]
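The transformed density can be sanity-checked by integration. A sketch using an arbitrary concrete choice, $X \sim N(0,1)$ with test values $a = 2$, $b = -1$ (so $f_Y$ should integrate to 1 and have mean $a \cdot EX + b = -1$):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

a, b = 2.0, -1.0                       # arbitrary test values, a > 0
f = norm.pdf                           # X ~ N(0, 1) as a concrete example
fY = lambda y: (1 / a) * f((y - b) / a)  # density of Y = aX + b

total, _ = integrate.quad(fY, -np.inf, np.inf)
mean, _ = integrate.quad(lambda y: y * fY(y), -np.inf, np.inf)
assert np.isclose(total, 1.0)          # fY is a probability density
assert np.isclose(mean, a * 0 + b)     # E(aX + b) = a EX + b
```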
(d) The density of the random variable X is
\[
f_X(x) = \begin{cases} \frac{1}{2\pi}, & x \in [0, 2\pi], \\ 0, & x \notin [0, 2\pi]. \end{cases}
\]
The distribution function is
\[
F_X(x) = \begin{cases} 0, & x < 0, \\ \frac{x}{2\pi}, & x \in [0, 2\pi], \\ 1, & x > 2\pi. \end{cases}
\]
The random variable Y takes values in $[-1, 1]$. Hence, $P(Y \leq y) = 0$ for $y \leq -1$ and $P(Y \leq y) = 1$ for $y \geq 1$. Let now $y \in (-1, 1)$. We have
\[
F_Y(y) = P(Y \leq y) = P(\sin(X) \leq y).
\]
The equation $\sin(x) = y$ has two solutions in the interval $[0, 2\pi]$: $x = \arcsin(y),\ \pi - \arcsin(y)$ for $y > 0$ and $x = \pi - \arcsin(y),\ 2\pi + \arcsin(y)$ for $y < 0$. Hence,
\[
F_Y(y) = \frac{\pi + 2\arcsin(y)}{2\pi}, \quad y \in (-1, 1).
\]
The distribution function of Y is
\[
F_Y(y) = \begin{cases} 0, & y \leq -1, \\ \frac{\pi + 2\arcsin(y)}{2\pi}, & y \in (-1, 1), \\ 1, & y \geq 1. \end{cases}
\]
We differentiate the above expression to obtain the probability density
\[
f_Y(y) = \begin{cases} \frac{1}{\pi\sqrt{1 - y^2}}, & y \in (-1, 1), \\ 0, & y \notin (-1, 1). \end{cases}
\]
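The arcsine-type distribution function derived above can be compared against a Monte Carlo simulation of $Y = \sin(X)$; the sketch below checks the empirical CDF at a few arbitrary evaluation points:

```python
import numpy as np

rng = np.random.default_rng(0)
# samples of Y = sin(X) with X uniform on [0, 2*pi]
y = np.sin(rng.uniform(0, 2 * np.pi, 10**6))

# empirical CDF vs F_Y(q) = (pi + 2 arcsin(q)) / (2 pi)
for q in (-0.9, -0.3, 0.4, 0.8):
    FY = (np.pi + 2 * np.arcsin(q)) / (2 * np.pi)
    assert abs(np.mean(y <= q) - FY) < 5e-3
```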
3. Let X be a discrete random variable taking values in the set of nonnegative integers with probability mass function $p_k = P(X = k)$, where $p_k \geq 0$ and $\sum_{k=0}^{+\infty} p_k = 1$. The generating function is defined as
\[
g(s) = E(s^X) = \sum_{k=0}^{+\infty} p_k s^k.
\]
(a) Show that
\[
EX = g'(1) \quad \text{and} \quad EX^2 = g''(1) + g'(1),
\]
where the prime denotes differentiation.
(b) Calculate the generating function of the Poisson random variable with
\[
p_k = P(X = k) = \frac{e^{-\lambda} \lambda^k}{k!}, \quad k = 0, 1, 2, \dots, \ \lambda > 0.
\]
(c) Prove that the generating function of a sum of independent nonnegative integer-valued random variables is the product of their generating functions.
(a) We have
\[
g'(s) = \sum_{k=0}^{+\infty} k p_k s^{k-1} \quad \text{and} \quad g''(s) = \sum_{k=0}^{+\infty} k(k-1) p_k s^{k-2}.
\]
Hence,
\[
g'(1) = \sum_{k=0}^{+\infty} k p_k = EX
\]
and
\[
g''(1) = \sum_{k=0}^{+\infty} k^2 p_k - \sum_{k=0}^{+\infty} k p_k = EX^2 - g'(1),
\]
from which it follows that
\[
EX^2 = g''(1) + g'(1).
\]
(b) We calculate
\[
g(s) = \sum_{k=0}^{+\infty} \frac{e^{-\lambda} \lambda^k}{k!} s^k = e^{\lambda(s-1)}.
\]
(c) Consider the independent nonnegative integer-valued random variables $X_i$, $i = 1, \dots, d$. Since the $X_i$'s are independent, so are the random variables $s^{X_i}$, $i = 1, \dots, d$. Consequently,
\[
g_{\sum_{i=1}^d X_i}(s) = E\big( s^{\sum_{i=1}^d X_i} \big) = \prod_{i=1}^d E(s^{X_i}) = \prod_{i=1}^d g_{X_i}(s).
\]
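The closed form $g(s) = e^{\lambda(s-1)}$ and the moment identities of part (a) can be checked by truncating the defining series; the values $\lambda = 1.7$, $s = 0.6$ below are arbitrary test parameters:

```python
import numpy as np
from math import exp, factorial

lam, s = 1.7, 0.6   # arbitrary test parameters

# generating function of Poisson(lam): truncated series vs closed form
g = sum(exp(-lam) * lam**k / factorial(k) * s**k for k in range(100))
assert np.isclose(g, exp(lam * (s - 1)))

# EX = g'(1), EX^2 = g''(1) + g'(1); for Poisson: EX = lam, EX^2 = lam^2 + lam
gp = sum(k * exp(-lam) * lam**k / factorial(k) for k in range(100))
gpp = sum(k * (k - 1) * exp(-lam) * lam**k / factorial(k) for k in range(100))
assert np.isclose(gp, lam)
assert np.isclose(gpp + gp, lam**2 + lam)
```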
4. Let $b \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$ a symmetric and positive definite matrix. Let X be the multivariate Gaussian random variable with probability density function
\[
\gamma(x) = \frac{1}{(2\pi)^{n/2}} \frac{1}{\sqrt{\det(\Sigma)}} \exp\Big( -\frac{1}{2} \big\langle \Sigma^{-1}(x - b),\, x - b \big\rangle \Big).
\]
(a) Show that $\int_{\mathbb{R}^n} \gamma(x)\, dx = 1$.
(b) Calculate the mean and the covariance matrix of X.
(c) Calculate the characteristic function of X.
(a) From the spectral theorem for symmetric positive definite matrices we have that there exists a diagonal matrix $\Lambda$ with positive entries $\lambda_i$ and an orthogonal matrix B such that
\[
\Sigma^{-1} = B^T \Lambda B.
\]
Let $z = x - b$ and $y = Bz$. We have
\[
\langle \Sigma^{-1} z, z \rangle = \langle B^T \Lambda B z, z \rangle = \langle \Lambda B z, B z \rangle = \langle \Lambda y, y \rangle = \sum_{i=1}^n \lambda_i y_i^2.
\]
Furthermore, we have that $\det(\Sigma^{-1}) = \prod_{i=1}^n \lambda_i$, that $\det(\Sigma) = \prod_{i=1}^n \lambda_i^{-1}$ and that the Jacobian of an orthogonal transformation is $|J| = |\det(B)| = 1$. Hence,
\[
\int_{\mathbb{R}^n} \exp\Big( -\frac{1}{2} \langle \Sigma^{-1}(x - b), x - b \rangle \Big) dx = \int_{\mathbb{R}^n} \exp\Big( -\frac{1}{2} \langle \Sigma^{-1} z, z \rangle \Big) dz = \int_{\mathbb{R}^n} \exp\Big( -\frac{1}{2} \sum_{i=1}^n \lambda_i y_i^2 \Big) |J|\, dy
\]
\[
= \prod_{i=1}^n \int_{\mathbb{R}} \exp\Big( -\frac{1}{2} \lambda_i y_i^2 \Big) dy_i = (2\pi)^{n/2} \prod_{i=1}^n \lambda_i^{-1/2} = (2\pi)^{n/2} \sqrt{\det(\Sigma)},
\]
from which we get that $\int_{\mathbb{R}^n} \gamma(x)\, dx = 1$.
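The matrix identities used above (the spectral decomposition of $\Sigma^{-1}$, the determinant products, and $|\det B| = 1$) can be verified numerically on a randomly generated positive definite test matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + 3 * np.eye(3)   # a symmetric positive definite test matrix

# spectral theorem applied to Sigma^{-1}: Sigma^{-1} = B^T Lambda B
evals, evecs = np.linalg.eigh(np.linalg.inv(Sigma))
B = evecs.T                       # orthogonal; rows are eigenvectors
Lam = np.diag(evals)
assert np.allclose(B.T @ Lam @ B, np.linalg.inv(Sigma))

# det(Sigma^{-1}) = prod lambda_i, det(Sigma) = prod 1/lambda_i, |det B| = 1
assert np.isclose(np.linalg.det(Sigma), np.prod(1 / evals))
assert np.isclose(abs(np.linalg.det(B)), 1.0)
```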
(b) From the above calculation we have that
\[
\gamma(x)\, dx = \gamma(B^T y + b)\, dy = \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \prod_{i=1}^n \exp\Big( -\frac{1}{2} \lambda_i y_i^2 \Big) dy_i.
\]
Consequently,
\[
EX = \int_{\mathbb{R}^n} x\, \gamma(x)\, dx = \int_{\mathbb{R}^n} (B^T y + b)\, \gamma(B^T y + b)\, dy = b \int_{\mathbb{R}^n} \gamma(B^T y + b)\, dy = b,
\]
since the term $\int_{\mathbb{R}^n} B^T y\, \gamma(B^T y + b)\, dy$ vanishes, the integrand being odd in $y$.
We note that, since $\Sigma^{-1} = B^T \Lambda B$, we have that $\Sigma = B^T \Lambda^{-1} B$. Furthermore, $z = B^T y$. We calculate
\[
E\big( (X_i - b_i)(X_j - b_j) \big) = \int_{\mathbb{R}^n} z_i z_j\, \gamma(z + b)\, dz
= \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \int_{\mathbb{R}^n} \Big( \sum_k B_{ki} y_k \Big) \Big( \sum_m B_{mj} y_m \Big) \exp\Big( -\frac{1}{2} \langle \Lambda y, y \rangle \Big) dy
\]
\[
= \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \sum_{k,m} B_{ki} B_{mj} \int_{\mathbb{R}^n} y_k y_m \exp\Big( -\frac{1}{2} \langle \Lambda y, y \rangle \Big) dy
= \sum_{k,m} B_{ki} B_{mj} \frac{1}{\lambda_k} \delta_{km} = \Sigma_{ij}.
\]
(c) Let Y be a multivariate Gaussian random variable with mean 0 and covariance I. Let also $C = B^T \Lambda^{-1/2} B$, the symmetric square root of $\Sigma$. We have that $\Sigma = CC^T = C^T C$ and that
\[
X = CY + b.
\]
To see this, we first note that X is Gaussian, since it is given through a linear transformation of a Gaussian random variable. Furthermore,
\[
EX = b \quad \text{and} \quad E\big( (X_i - b_i)(X_j - b_j) \big) = \Sigma_{ij}.
\]
Now we have, using the characteristic function $e^{-s^2/2}$ of a standard one-dimensional Gaussian,
\[
\phi(t) = E e^{i\langle X, t \rangle} = e^{i\langle b, t \rangle}\, E e^{i\langle CY, t \rangle} = e^{i\langle b, t \rangle}\, E e^{i\langle Y, C^T t \rangle} = e^{i\langle b, t \rangle}\, E e^{i \sum_j \left( \sum_k C_{jk} t_k \right) Y_j}
\]
\[
= e^{i\langle b, t \rangle}\, e^{-\frac{1}{2} \sum_j \left| \sum_k C_{jk} t_k \right|^2} = e^{i\langle b, t \rangle}\, e^{-\frac{1}{2} \langle Ct, Ct \rangle} = e^{i\langle b, t \rangle}\, e^{-\frac{1}{2} \langle t, C^T C t \rangle} = e^{i\langle b, t \rangle}\, e^{-\frac{1}{2} \langle t, \Sigma t \rangle}.
\]
Consequently,
\[
\phi(t) = e^{i\langle b, t \rangle - \frac{1}{2} \langle t, \Sigma t \rangle}.
\]
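The Gaussian characteristic function can be verified by Monte Carlo, constructing $X = CY + b$ from the symmetric square root of $\Sigma$ as above. The 2-dimensional $b$, $\Sigma$ and evaluation point $t$ below are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(2)
b = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite test matrix

# X = C Y + b with C the symmetric square root of Sigma, Y ~ N(0, I)
evals, evecs = np.linalg.eigh(Sigma)
C = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
Y = rng.standard_normal((10**6, 2))
X = Y @ C.T + b

t = np.array([0.3, -0.7])
phi_mc = np.mean(np.exp(1j * (X @ t)))                       # Monte Carlo E e^{i<X,t>}
phi_exact = np.exp(1j * (b @ t) - 0.5 * (t @ Sigma @ t))     # e^{i<b,t> - <t,Sigma t>/2}
assert abs(phi_mc - phi_exact) < 5e-3
```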