
Australian School of Business

Probability and Statistics


Solutions Week 4
1. We are given $X \sim \text{Exp}(\mu)$ so that:
$$E[X] = \frac{1}{\mu} \qquad \text{and} \qquad Var(X) = \frac{1}{\mu^2}.$$
Also, $N \sim \text{Poisson}(\lambda)$ so that:
$$E[N] = \lambda \qquad \text{and} \qquad Var(N) = \lambda.$$
(a) The mean of $S$ is:
$$E[S] = E[E[S|N]] = E[N\,E[X]] = E[N]\,E[X] = \lambda/\mu.$$
(b) The variance of $S$ is:
$$Var(S) = E[N]\,Var(X) + (E[X])^2\,Var(N) = \lambda\,\frac{1}{\mu^2} + \left(\frac{1}{\mu}\right)^2\lambda = 2\lambda/\mu^2.$$
(c) The m.g.f. of $S$ is:
$$M_S(t) = M_N\left(\log M_X(t)\right),$$
where
$$M_X(t) = \frac{\mu}{\mu - t} \qquad \text{and} \qquad M_N(t) = e^{\lambda\left(e^t - 1\right)}.$$
Thus,
$$M_S(t) = e^{\lambda\left(e^{\log(\mu/(\mu - t))} - 1\right)} = \exp\left(\lambda\,\frac{t}{\mu - t}\right).$$
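A quick Monte Carlo sketch of (a) and (b) — an addition to the original solutions, assuming numpy is available; the values $\lambda = 3$, $\mu = 2$ are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 3.0, 2.0        # illustrative Poisson and exponential parameters
n_sims = 100_000

# S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Exp(mu) i.i.d.
N = rng.poisson(lam, size=n_sims)
S = np.array([rng.exponential(scale=1/mu, size=n).sum() for n in N])

print(S.mean(), lam / mu)         # both close to 1.5
print(S.var(), 2 * lam / mu**2)   # both close to 1.5
```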

2. $X_k \sim \text{Exp}(1)$ implies that $f_{X_k}(x) = e^{-x}$ for $x \ge 0$ and zero otherwise, for $k = 1, 2, 3$. We have that:
$$F_{X_k}(x) = \begin{cases} 0, & \text{if } x < 0; \\ 1 - e^{-x}, & \text{if } x \ge 0, \end{cases}$$
for $k = 1, 2, 3$. Let $X_{(1)} = \min\{X_1, X_2, X_3\}$ and $X_{(3)} = \max\{X_1, X_2, X_3\}$. Finding the distributions of the minimum and the maximum, we have:
$$F_{X_{(1)}}(x) = 1 - (1 - F(x))^3 = 1 - e^{-3x},$$
for $x \ge 0$ and zero otherwise, so that:
$$f_{X_{(1)}}(x) = 3e^{-3x}, \qquad \text{for } x \ge 0,$$
and zero otherwise. This is Exp(3). Similarly,
$$F_{X_{(3)}}(x) = (F(x))^3 = \left(1 - e^{-x}\right)^3,$$
for $x \ge 0$ and zero otherwise, so that:
$$f_{X_{(3)}}(x) = 3e^{-x}\left(1 - e^{-x}\right)^2, \qquad \text{for } x \ge 0,$$
and zero otherwise.
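A minimal simulation check that the minimum of three unit exponentials is Exp(3) — an addition, assuming numpy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=(100_000, 3))   # three Exp(1) draws per row
mins = x.min(axis=1)

# Kolmogorov-Smirnov test against Exp(3), i.e. scale 1/3; a large p-value is consistent
print(stats.kstest(mins, 'expon', args=(0, 1/3)))
```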


(a) The joint density of $\left(X_{(1)}, X_{(2)}, X_{(3)}\right)$ is given by:
$$f_{X_{(1)},X_{(2)},X_{(3)}}(y_1, y_2, y_3) = 3!\,f(y_1)\,f(y_2)\,f(y_3) = 6e^{-(y_1+y_2+y_3)}, \qquad \text{for } 0 \le y_1 \le y_2 \le y_3 < \infty,$$
and zero otherwise. Therefore, we get the joint density of $\left(X_{(1)}, X_{(3)}\right)$ by integrating over all possible values of $X_{(2)}$:
$$f_{X_{(1)},X_{(3)}}(y_1, y_3) = \int_{y_1}^{y_3} 6e^{-(y_1+y_2+y_3)}\,dy_2 = 6\left[-e^{-(y_1+y_2+y_3)}\right]_{y_2=y_1}^{y_2=y_3} = 6\left(e^{-2y_1-y_3} - e^{-y_1-2y_3}\right),$$
for $0 \le y_1 \le y_3 < \infty$, and zero otherwise.

(b) We can show that:
$$E\left[X_{(1)}\right] = 3\int_0^\infty x e^{-3x}\,dx \overset{*}{=} 3\left[\frac{e^{-3x}}{9}\left(-3x - 1\right)\right]_0^\infty = 3\cdot\frac{1}{9} = \frac{1}{3},$$
* using $\int x \exp(cx)\,dx = \frac{\exp(cx)}{c^2}(cx - 1)$, and (note $\exp(-a)^b = \exp(-a\,b)$):
$$E\left[X_{(3)}\right] = 3\int_0^\infty y e^{-y}\left(1 - e^{-y}\right)^2 dy = 3\int_0^\infty \left(y e^{-y} - 2y e^{-2y} + y e^{-3y}\right) dy$$
$$= 3\left[\frac{e^{-y}}{1}(-y-1) - 2\,\frac{e^{-2y}}{4}(-2y-1) + \frac{e^{-3y}}{9}(-3y-1)\right]_0^\infty = 3\left(1 - \frac{1}{2} + \frac{1}{9}\right) = \frac{11}{6}.$$
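A Monte Carlo check of both means — an addition, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=(200_000, 3))

print(x.min(axis=1).mean(), 1/3)    # E[X_(1)] = 1/3
print(x.max(axis=1).mean(), 11/6)   # E[X_(3)] = 11/6
```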

(c) We have that (note $\exp(-a)^b = \exp(-a\,b)$):
$$Var\left(X_{(1)}\right) = E\left[X_{(1)}^2\right] - \left(E\left[X_{(1)}\right]\right)^2, \qquad E\left[X_{(1)}^2\right] = 3\int_0^\infty x^2 e^{-3x}\,dx \overset{**}{=} 3\left[e^{-3x}\left(-\frac{x^2}{3} - \frac{2x}{9} - \frac{2}{27}\right)\right]_0^\infty = \frac{2}{9},$$
so that $Var\left(X_{(1)}\right) = \frac{2}{9} - \left(\frac{1}{3}\right)^2 = \frac{1}{9}$. Similarly,
$$Var\left(X_{(3)}\right) = E\left[X_{(3)}^2\right] - \left(\frac{11}{6}\right)^2, \qquad E\left[X_{(3)}^2\right] = 3\int_0^\infty y^2 e^{-y}\left(1 - e^{-y}\right)^2 dy = 3\int_0^\infty \left(y^2 e^{-y} - 2y^2 e^{-2y} + y^2 e^{-3y}\right) dy$$
$$= 3\left[e^{-y}\left(-y^2 - 2y - 2\right) - 2e^{-2y}\left(-\frac{y^2}{2} - \frac{2y}{4} - \frac{2}{8}\right) + e^{-3y}\left(-\frac{y^2}{3} - \frac{2y}{9} - \frac{2}{27}\right)\right]_0^\infty = 3\left(2 - 2\cdot\frac{2}{8} + \frac{2}{27}\right) = \frac{85}{18},$$
so that $Var\left(X_{(3)}\right) = \frac{85}{18} - \frac{121}{36} = \frac{49}{36}$,
** using $\int x^2 \exp(cx)\,dx = \exp(cx)\left(\frac{x^2}{c} - \frac{2x}{c^2} + \frac{2}{c^3}\right)$.
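And a variance check by simulation — an addition, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=(200_000, 3))

print(x.min(axis=1).var(), 1/9)     # Var(X_(1)) = 1/9  ~ 0.1111
print(x.max(axis=1).var(), 49/36)   # Var(X_(3)) = 49/36 ~ 1.3611
```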

(d) Now, for:
$$E\left[X_{(1)}X_{(3)}\right] = \int_0^\infty\!\!\int_x^\infty xy\,f_{X_{(1)},X_{(3)}}(x,y)\,dy\,dx = \int_0^\infty\!\!\int_x^\infty xy\,6e^{-(x+y)}\left(e^{-x} - e^{-y}\right)dy\,dx$$
$$= \int_0^\infty 6x\left(e^{-2x}\int_x^\infty y e^{-y}\,dy - e^{-x}\int_x^\infty y e^{-2y}\,dy\right)dx \overset{*}{=} \int_0^\infty 6x\left(e^{-2x}\,e^{-x}(x+1) - e^{-x}\,\frac{e^{-2x}}{4}(2x+1)\right)dx$$
$$= \int_0^\infty \left(6x^2 e^{-3x} + 6x e^{-3x} - \frac{12}{4}x^2 e^{-3x} - \frac{6}{4}x e^{-3x}\right)dx = \int_0^\infty \left(3x^2 e^{-3x} + \frac{9}{2}x e^{-3x}\right)dx$$
$$\overset{**}{=} 3\cdot\frac{2}{27} + \frac{9}{2}\cdot\frac{1}{9} = \frac{2}{9} + \frac{1}{2} = \frac{13}{18},$$
again * using $\int x\exp(cx)\,dx = \frac{\exp(cx)}{c^2}(cx - 1)$ and ** using $\int x^2\exp(cx)\,dx = \exp(cx)\left(\frac{x^2}{c} - \frac{2x}{c^2} + \frac{2}{c^3}\right)$.
Therefore, we have:
$$Cov\left(X_{(1)}, X_{(3)}\right) = E\left[X_{(1)}X_{(3)}\right] - E\left[X_{(1)}\right]E\left[X_{(3)}\right] = \frac{13}{18} - \frac{1}{3}\cdot\frac{11}{6} = \frac{1}{9}.$$
Therefore, the required correlation coefficient is:
$$\rho\left(X_{(1)}, X_{(3)}\right) = \frac{Cov\left(X_{(1)}, X_{(3)}\right)}{\sqrt{Var\left(X_{(1)}\right)Var\left(X_{(3)}\right)}} = \frac{1/9}{\sqrt{(1/9)(49/36)}} = \frac{2}{7}.$$
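The correlation is easy to confirm numerically — an addition, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=(200_000, 3))

r = np.corrcoef(x.min(axis=1), x.max(axis=1))[0, 1]
print(r, 2/7)   # both close to 0.2857
```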

3. $X \sim \text{Gamma}(\alpha, 1)$ implies:
$$f_X(x) = \frac{x^{\alpha-1}e^{-x}}{\Gamma(\alpha)}, \quad \text{for } x \ge 0, \qquad\text{and}\qquad M_X(t) = \left(\frac{1}{1-t}\right)^{\alpha}.$$
Similarly, $Y \sim \text{Gamma}(\beta, 1)$ implies:
$$f_Y(y) = \frac{y^{\beta-1}e^{-y}}{\Gamma(\beta)}, \quad \text{for } y \ge 0, \qquad\text{and}\qquad M_Y(t) = \left(\frac{1}{1-t}\right)^{\beta}.$$

(a) Using the m.g.f. technique, we have:
$$M_U(t) = E\left[e^{Ut}\right] = E\left[e^{(X+Y)t}\right] = E\left[e^{Xt}\right]E\left[e^{Yt}\right] = M_X(t)\,M_Y(t) = \left(\frac{1}{1-t}\right)^{\alpha+\beta},$$
which is the m.g.f. of a Gamma($\alpha+\beta$, 1).
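A distributional sanity check of (a) — an addition, assuming numpy and scipy; the shapes $\alpha = 2$, $\beta = 5$ are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
a, b = 2.0, 5.0                       # illustrative shape parameters
x = rng.gamma(a, 1.0, size=100_000)
y = rng.gamma(b, 1.0, size=100_000)

# KS test of U = X + Y against Gamma(a + b, 1)
print(stats.kstest(x + y, 'gamma', args=(a + b,)))
```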


(b) By independence, we note that:
$$f(x,y) = f_X(x)\,f_Y(y) = \frac{x^{\alpha-1}e^{-x}}{\Gamma(\alpha)}\cdot\frac{y^{\beta-1}e^{-y}}{\Gamma(\beta)}.$$
The inverse of the transformation
$$u = x + y \qquad\text{and}\qquad v = \frac{x}{x+y}$$
is given by:
$$x = uv \qquad\text{and}\qquad y = u - uv = u(1-v),$$
which is derived by: $x = u - y$, so $v = \frac{u-y}{u}$, hence $uv = u - y$, i.e., $y = u(1-v)$, and then $x = u - u(1-v) = uv$.
Its Jacobian is:
$$J(u,v) = \det\begin{pmatrix} \partial h_1(u,v)/\partial u & \partial h_1(u,v)/\partial v \\ \partial h_2(u,v)/\partial u & \partial h_2(u,v)/\partial v \end{pmatrix} = \det\begin{pmatrix} v & u \\ 1-v & -u \end{pmatrix} = -uv - u(1-v) = -u.$$
Thus $|J(u,v)| = u$, because $0 < u < \infty$. By the Jacobian transformation technique, the joint density of $U$ and $V$ is:
$$f_{U,V}(u,v) = \frac{e^{-uv}(uv)^{\alpha-1}}{\Gamma(\alpha)}\cdot\frac{e^{-u(1-v)}\left[u(1-v)\right]^{\beta-1}}{\Gamma(\beta)}\cdot u,$$
for $0 < u < \infty$ and $0 < v < 1$ and zero otherwise.


(c) Use euv eu(1v) = eu , than we can further simplify the joint density as:
fUV (u, v) =

1
1
+1 u
1
u
(1 v)1 .
|
{z e } |v
{z
}
() ()
{z
} function of u alone function of v alone
|
constant

Thus, we see that we can express the joint density as a product of functions of u alone and v
alone, i.e., fU,V (u, v) = fU (u)fV (v). Therefore, U and V are independent.
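A quick empirical look at this independence — an addition, assuming numpy; $\alpha = 2$, $\beta = 5$ again illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
a, b = 2.0, 5.0
x = rng.gamma(a, 1.0, size=200_000)
y = rng.gamma(b, 1.0, size=200_000)
u, v = x + y, x / (x + y)

# Zero correlation is necessary (not sufficient) for independence, but it is a
# cheap first check; a KS test of V against Beta(a, b) follows after part (d).
print(np.corrcoef(u, v)[0, 1])   # close to 0
```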
(d) Note $x, y \ge 0$, thus $0 \le \frac{X}{X+Y} \le 1$. For the marginal of $U$, we have:
$$f_U(u) = \int_0^1 \frac{u^{\alpha+\beta-1}e^{-u}\,v^{\alpha-1}(1-v)^{\beta-1}}{\Gamma(\alpha)\,\Gamma(\beta)}\,dv = \frac{u^{\alpha+\beta-1}e^{-u}}{\Gamma(\alpha+\beta)}\underbrace{\int_0^1 \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,v^{\alpha-1}(1-v)^{\beta-1}\,dv}_{\text{density of a Beta}(\alpha,\beta),\ =1} = \frac{u^{\alpha+\beta-1}e^{-u}}{\Gamma(\alpha+\beta)},$$
for $u > 0$ and zero otherwise. This is the density of a Gamma($\alpha+\beta$, 1), which reinforces the result in (a). Note: $0 \le X + Y$. For the marginal of $V$, we have:
$$f_V(v) = \int_0^\infty \frac{u^{\alpha+\beta-1}e^{-u}\,v^{\alpha-1}(1-v)^{\beta-1}}{\Gamma(\alpha)\,\Gamma(\beta)}\,du = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,v^{\alpha-1}(1-v)^{\beta-1}\underbrace{\int_0^\infty \frac{u^{\alpha+\beta-1}e^{-u}}{\Gamma(\alpha+\beta)}\,du}_{\text{density of a Gamma}(\alpha+\beta,1),\ =1} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,v^{\alpha-1}(1-v)^{\beta-1},$$
for $0 < v < 1$ and zero otherwise. This is the density of a Beta($\alpha$, $\beta$).
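As promised after (c), the Beta marginal can be verified directly — an addition, assuming numpy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a, b = 2.0, 5.0
x = rng.gamma(a, 1.0, size=100_000)
y = rng.gamma(b, 1.0, size=100_000)

# KS test of V = X/(X+Y) against Beta(a, b)
print(stats.kstest(x / (x + y), 'beta', args=(a, b)))
```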


(e) Since $X = UV$ and by independence, we have:
$$E[X] = E[U]\,E[V] \quad\Longrightarrow\quad \alpha = (\alpha+\beta)\,E[V], \qquad\text{so that}\qquad E[V] = \frac{\alpha}{\alpha+\beta}.$$
Similarly, we have for the variance:
$$Var(X) = Var(UV) = E\left[U^2V^2\right] - \left(E[UV]\right)^2 = E\left[U^2\right]E\left[V^2\right] - \alpha^2,$$
where $Var(X) = \alpha$ (since $X \sim \text{Gamma}(\alpha,1)$) and $E\left[U^2\right] = Var(U) + (E[U])^2 = (\alpha+\beta) + (\alpha+\beta)^2 = (\alpha+\beta)(1+\alpha+\beta)$, so that
$$E\left[V^2\right] = \frac{\alpha + \alpha^2}{E\left[U^2\right]} = \frac{\alpha + \alpha^2}{(\alpha+\beta)(1+\alpha+\beta)}.$$
Thus, the variance of $V$ is:
$$Var(V) = E\left[V^2\right] - \left(E[V]\right)^2 = \frac{\alpha+\alpha^2}{(\alpha+\beta)(1+\alpha+\beta)} - \frac{\alpha^2}{(\alpha+\beta)^2} = \frac{(\alpha+\beta)\left(\alpha+\alpha^2\right) - \alpha^2\left(1+\alpha+\beta\right)}{(\alpha+\beta)^2(1+\alpha+\beta)} = \frac{\alpha\beta}{(\alpha+\beta)^2(1+\alpha+\beta)}.$$
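These moment formulas agree with scipy's Beta distribution — an addition; the values $\alpha = 2$, $\beta = 5$ are illustrative:

```python
from scipy import stats

a, b = 2.0, 5.0
print(a / (a + b), stats.beta.mean(a, b))                        # E[V]
print(a * b / ((a + b)**2 * (1 + a + b)), stats.beta.var(a, b))  # Var(V)
```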

4. $X_k \sim \text{Exp}(1)$ implies that $f_{X_k}(x) = e^{-x}$ for $x \ge 0$ and zero otherwise, for $k = 1, 2, 3$. We have that:
$$F_{X_k}(x) = \begin{cases} 0, & \text{if } x < 0; \\ 1 - e^{-x}, & \text{if } x \ge 0, \end{cases}$$
for $k = 1, 2, 3$. Let $X_{(1)} = \min\{X_1, X_2, X_3\}$ and $X_{(3)} = \max\{X_1, X_2, X_3\}$. As in Problem 2, the distributions of the minimum and the maximum are:
$$F_{X_{(1)}}(x) = 1 - (1 - F(x))^3 = 1 - e^{-3x}, \qquad f_{X_{(1)}}(x) = 3e^{-3x},$$
for $x \ge 0$ and zero otherwise (this is Exp(3)), and
$$F_{X_{(3)}}(x) = (F(x))^3 = \left(1 - e^{-x}\right)^3, \qquad f_{X_{(3)}}(x) = 3e^{-x}\left(1 - e^{-x}\right)^2,$$
for $x \ge 0$ and zero otherwise. The joint density of $\left(X_{(1)}, X_{(2)}, X_{(3)}\right)$ is given by:
$$f_{X_{(1)},X_{(2)},X_{(3)}}(y_1, y_2, y_3) = 3!\,f(y_1)\,f(y_2)\,f(y_3) = 6e^{-(y_1+y_2+y_3)}, \qquad \text{for } 0 \le y_1 \le y_2 \le y_3 < \infty,$$
and zero otherwise. Therefore, we get the joint density of $\left(X_{(1)}, X_{(3)}\right)$ by integrating over all possible values of $X_{(2)}$:
$$f_{X_{(1)},X_{(3)}}(y_1, y_3) = \int_{y_1}^{y_3} 6e^{-(y_1+y_2+y_3)}\,dy_2 = 6\left(e^{-2y_1-y_3} - e^{-y_1-2y_3}\right), \qquad \text{for } 0 \le y_1 \le y_3 < \infty,$$
and zero otherwise.


(a) First, we find the conditional density of $X_{(3)}$ given $X_{(1)}$; by definition,
$$f_{X_{(3)}|X_{(1)}}(y_3|y_1) = \frac{f_{X_{(1)},X_{(3)}}(y_1, y_3)}{f_{X_{(1)}}(y_1)} = \frac{6\left(e^{-2y_1-y_3} - e^{-y_1-2y_3}\right)}{3e^{-3y_1}}, \qquad \text{for } 0 \le y_1 \le y_3 < \infty,$$
and zero otherwise. Replacing $y_1 = x$ and $y_3 = y$, we have:
$$f_{X_{(3)}|X_{(1)}}(y|x) = 2\left(e^{-(y-x)} - e^{-2(y-x)}\right), \qquad \text{for } 0 \le x \le y < \infty,$$
and zero otherwise. Thus,
$$E\left[X_{(3)}\,\middle|\,X_{(1)} = x\right] = \int_x^\infty 2y\left(e^{-(y-x)} - e^{-2(y-x)}\right)dy = 2e^x\int_x^\infty y e^{-y}\,dy - 2e^{2x}\int_x^\infty y e^{-2y}\,dy$$
$$\overset{*}{=} 2e^x\,e^{-x}(x+1) - 2e^{2x}\,\frac{e^{-2x}}{4}(2x+1) = 2(x+1) - \frac{2x+1}{2} = x + \frac{3}{2},$$
* using $\int y\exp(cy)\,dy = \frac{\exp(cy)}{c^2}(cy - 1)$.
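The conditional mean can be checked by numerical integration of the conditional density — an addition, assuming scipy; the conditioning point $x = 0.8$ is arbitrary:

```python
import numpy as np
from scipy.integrate import quad

x = 0.8   # arbitrary conditioning value
cond_mean, _ = quad(lambda y: 2*y*(np.exp(-(y - x)) - np.exp(-2*(y - x))), x, np.inf)
print(cond_mean, x + 1.5)   # both 2.3
```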

(b) Similarly, we can find the conditional density of $X_{(1)}$ given $X_{(3)}$ as:
$$f_{X_{(1)}|X_{(3)}}(y|x) = \frac{2\left(e^{-2y} - e^{-(y+x)}\right)}{\left(1 - e^{-x}\right)^2}, \qquad \text{for } 0 \le y \le x < \infty,$$
and zero otherwise. Thus,
$$E\left[X_{(1)}\,\middle|\,X_{(3)} = x\right] = \int_0^x y\,\frac{2\left(e^{-2y} - e^{-(y+x)}\right)}{\left(1 - e^{-x}\right)^2}\,dy \overset{*}{=} \frac{2}{\left(1 - e^{-x}\right)^2}\left(\left[\frac{e^{-2y}}{4}\left(-2y - 1\right)\right]_0^x - e^{-x}\left[-e^{-y}(y+1)\right]_0^x\right)$$
$$= \frac{2}{\left(1 - e^{-x}\right)^2}\left(\frac{1 - (2x+1)e^{-2x}}{4} + (x+1)e^{-2x} - e^{-x}\right) = \frac{1 - 4e^{-x} + 3e^{-2x} + 2xe^{-2x}}{2\left(1 - e^{-x}\right)^2},$$
* using $\int y\exp(cy)\,dy = \frac{\exp(cy)}{c^2}(cy - 1)$.
(c) Already derived earlier.
(d) We use the Jacobian transformation technique by first letting:
$$R = X_{(3)} - X_{(1)} \qquad\text{and}\qquad S = X_{(1)},$$
with the inverse transformation:
$$X_{(1)} = S \qquad\text{and}\qquad X_{(3)} = R + S.$$
The Jacobian of this transformation is given by:
$$J(R,S) = \det\begin{pmatrix} \partial S/\partial R & \partial S/\partial S \\ \partial(R+S)/\partial R & \partial(R+S)/\partial S \end{pmatrix} = \det\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} = -1.$$
Thus, $|J(R,S)| = 1$ and
$$f_{R,S}(r,s) = 6\left(e^{-2s-(r+s)} - e^{-s-2(r+s)}\right) = 6e^{-3s-r}\left(1 - e^{-r}\right), \qquad \text{for } 0 \le s < r + s < \infty,$$
and zero otherwise, where the range is equivalently $0 \le s < \infty$ and $0 \le r < \infty$. Thus, the marginal density of $R$ is obtained by integrating over all possible values of $S$:
$$f_R(r) = \int_0^\infty 6e^{-3s-r}\left(1 - e^{-r}\right)ds = 6\left(1 - e^{-r}\right)e^{-r}\left[-\frac{1}{3}e^{-3s}\right]_0^\infty = 6\left(1 - e^{-r}\right)\frac{e^{-r}}{3} = 2e^{-r}\left(1 - e^{-r}\right),$$
for $0 \le r < \infty$, and zero otherwise.
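A simulation check of the range density through its mean, $E[R] = \int_0^\infty r\,2e^{-r}\left(1 - e^{-r}\right)dr = 2\left(1 - \frac{1}{4}\right) = \frac{3}{2}$, which also equals $E[X_{(3)}] - E[X_{(1)}] = \frac{11}{6} - \frac{1}{3}$ — an addition, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.exponential(scale=1.0, size=(200_000, 3))
r = x.max(axis=1) - x.min(axis=1)

print(r.mean(), 1.5)   # E[R] = 3/2
```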




5. Use the m.g.f. technique. Recall that if $X \sim N\left(\mu, \sigma^2\right)$, then
$$M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.$$
(a) Let $S = X_1 + X_2$. Then:
$$M_S(t) = E\left[e^{(X_1+X_2)t}\right] = E\left[e^{X_1t}\right]E\left[e^{X_2t}\right] = e^{\frac{1}{2}t^2}\,e^{\frac{1}{2}t^2} = e^{\frac{1}{2}(2)t^2},$$
which is the m.g.f. of a $N(0, 2)$.
(b) Let $D = X_1 - X_2$. Then:
$$M_D(t) = E\left[e^{(X_1-X_2)t}\right] = E\left[e^{X_1t}\right]E\left[e^{X_2(-t)}\right] = e^{\frac{1}{2}t^2}\,e^{\frac{1}{2}(-t)^2} = e^{\frac{1}{2}(2)t^2},$$
which is the m.g.f. of a $N(0, 2)$. Thus, $D$ has the same distribution as $S$.
(c) Now, assume that they are no longer independent but have the bivariate normal density:
$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi\sqrt{1-\rho^2}}\exp\left(-\frac{1}{2(1-\rho^2)}\left(x_1^2 - 2\rho x_1x_2 + x_2^2\right)\right).$$
Using the Jacobian transformation technique, we find the joint distribution of $S$ and $D$. From
$$S = X_1 + X_2 \qquad\text{and}\qquad D = X_1 - X_2,$$
the inverse of this transformation is
$$X_1 = \frac{1}{2}(S + D) \qquad\text{and}\qquad X_2 = \frac{1}{2}(S - D),$$
which is derived by $X_1 = S - X_2$, so $D = S - 2X_2$, giving $X_2 = \frac{S-D}{2}$ and $X_1 = S - \frac{S-D}{2} = \frac{S+D}{2}$.
Its Jacobian is:
$$J(S,D) = \det\begin{pmatrix} \partial\frac{S+D}{2}/\partial S & \partial\frac{S+D}{2}/\partial D \\ \partial\frac{S-D}{2}/\partial S & \partial\frac{S-D}{2}/\partial D \end{pmatrix} = \det\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} = -1/4 - 1/4 = -1/2.$$
Thus $|J(S,D)| = 1/2$. Therefore,
$$f_{S,D}(s,d) = \frac{1}{2}\cdot\frac{1}{2\pi\sqrt{1-\rho^2}}\exp\left(-\frac{1}{2(1-\rho^2)}\left[\left(\frac{s+d}{2}\right)^2 - 2\rho\,\frac{s+d}{2}\,\frac{s-d}{2} + \left(\frac{s-d}{2}\right)^2\right]\right)$$
$$= \frac{1}{4\pi\sqrt{1-\rho^2}}\exp\left(-\frac{1}{8(1-\rho^2)}\left[s^2 + d^2 - 2\rho\left(s^2 - d^2\right) + s^2 + d^2\right]\right)$$
$$= \frac{1}{4\pi\sqrt{1-\rho^2}}\exp\left(-\frac{1}{4(1-\rho^2)}\left[(1-\rho)s^2 + (1+\rho)d^2\right]\right)$$
$$= \frac{1}{4\pi\sqrt{1-\rho^2}}\exp\left(-\frac{s^2}{4(1+\rho)}\right)\exp\left(-\frac{d^2}{4(1-\rho)}\right).$$
Therefore, clearly we can write the density as a product of a function of $s$ alone and a function of $d$ alone. $S$ and $D$ are therefore independent.
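A simulation check of this (perhaps surprising) fact — an addition, assuming numpy; $\rho = 0.6$ is illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
s, d = x1 + x2, x1 - x2

print(np.corrcoef(s, d)[0, 1])    # close to 0
print(s.var(), 2*(1 + rho))       # Var(S) = 2(1+rho)
print(d.var(), 2*(1 - rho))       # Var(D) = 2(1-rho)
```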
(d) We have that the p.d.f. of $X \sim \text{Gamma}(\alpha, \lambda)$ is given by:
$$f_X(x) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\lambda x}, \qquad \text{if } x > 0,$$
and zero otherwise.
1. The transformation $g(X) = 1/X$ is a monotonically decreasing function for $x > 0$, because $\frac{dg(x)}{dx} = \frac{d\,x^{-1}}{dx} = -x^{-2} < 0$ for $x > 0$. Hence, we can apply the transformation technique. With $Y = g(X) = 1/X$, $g^{-1}(y) = 1/y$, $\frac{\partial g^{-1}(y)}{\partial y} = -y^{-2} < 0$, and support of $Y$ given by $g(0) = \infty$, $g(\infty) = 0$, we have:
$$f_Y(y) = f_X\left(g^{-1}(y)\right)\left|\frac{\partial g^{-1}(y)}{\partial y}\right| = \frac{\lambda^\alpha}{\Gamma(\alpha)}\left(\frac{1}{y}\right)^{\alpha-1}e^{-\lambda/y}\,y^{-2} = \frac{\lambda^\alpha\,y^{-\alpha-1}\,e^{-\lambda/y}}{\Gamma(\alpha)},$$
for $y > 0$ and zero otherwise.
2. The c.d.f. of the inverse gamma distribution, as a function of the c.d.f. of the gamma distribution, is given by applying the CDF technique:
$$F_Y(y) = 1 - F_X\left(g^{-1}(y)\right) = 1 - F_X(1/y).$$
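This matches scipy's parameterization of the inverse gamma distribution — an addition; the values $\alpha = 3$, $\lambda = 2$ are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
alpha, lam = 3.0, 2.0
x = rng.gamma(alpha, 1/lam, size=100_000)   # X ~ Gamma(alpha, rate lam)

# Y = 1/X should be inverse gamma with shape alpha and scale lam
print(stats.kstest(1/x, 'invgamma', args=(alpha, 0, lam)))
```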
6.
I. C
A t-distribution is obtained as a standard normal r.v. divided by the square root of a chi-squared r.v. divided by its degrees of freedom.
We have $Z_1 + Z_2 \sim N(0, 2)$, i.e., $\frac{Z_1 + Z_2}{\sqrt{2}} \sim N(0, 1)$ (see lecture notes).
For a chi-squared distribution we have the m.g.f. $M_{V_i}(t) = (1 - 2t)^{-r_i/2}$ for $i = 1, 2$. Hence,
$$M_{V_1}(t)\,M_{V_2}(t) = (1 - 2t)^{-r_1/2}\,(1 - 2t)^{-r_2/2} = (1 - 2t)^{-(r_1+r_2)/2},$$
which is the m.g.f. of a chi-squared distribution with $r_1 + r_2$ degrees of freedom. Hence, $V_1 + V_2$ has a chi-squared distribution with $r_1 + r_2$ degrees of freedom.
II. E
See lecture notes/previous question.
III. C
We have:
$$Z_1 + Z_2 \sim N\left(0,\ Var(Z_1) + Var(Z_2) + 2Cov(Z_1, Z_2)\right) \sim N(0,\ 1 + 1 + 2\rho\cdot 1\cdot 1) \sim N(0,\ 2 + 2\rho) \sim N(0,\ 2(1+\rho)).$$
Thus $Var\left(\frac{Z_1 + Z_2}{\sqrt{2}}\right) = \frac{2(1+\rho)}{2} = 1 + \rho \ne 1$.

IV. D
We have $M_{X_k}(t) = \left(1 - t/\lambda\right)^{-1}$ for $k = 1, \ldots, n$. Let $Y_k = X_k/n$; then we have $M_{Y_k}(t) = M_{X_k/n}(t) = M_{X_k}(t/n) = \left(1 - \frac{t}{n\lambda}\right)^{-1}$ for $k = 1, \ldots, n$.
Using the m.g.f. technique we determine the distribution of the sample mean by its m.g.f.:
$$M_{\bar{X}}(t) = M_{Y_1}(t)\cdots M_{Y_n}(t) = \left(1 - \frac{t}{n\lambda}\right)^{-n},$$
which is the m.g.f. of a Gamma distribution with parameters $n$ and $n\lambda$.
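A quick check that the sample mean of exponentials is Gamma($n$, $n\lambda$) — an addition, assuming numpy and scipy; $n = 5$, $\lambda = 2$ are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, lam = 5, 2.0
xbar = rng.exponential(scale=1/lam, size=(100_000, n)).mean(axis=1)

# Gamma with shape n and rate n*lam, i.e. scale 1/(n*lam)
print(stats.kstest(xbar, 'gamma', args=(n, 0, 1/(n*lam))))
```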
V. D
Use the m.g.f. technique. $M_{X_k}(t) = \exp\left(\lambda(\exp(t) - 1)\right)$ for $k = 1, \ldots, n$. We have:
$$M_S(t) = M_{X_1}(t)\cdots M_{X_n}(t) = \prod_{k=1}^n \exp\left(\lambda(\exp(t) - 1)\right) = \left(\exp\left(\lambda(\exp(t) - 1)\right)\right)^n = \exp\left(n\lambda(\exp(t) - 1)\right),$$
which is the m.g.f. of a Poisson r.v. with mean $n\lambda$.



VI. B
We have:
$$\Pr\left(X_{(20)} > 1\right) = 1 - \Pr\left(X_{(20)} \le 1\right) = 1 - \left(F_X(1)\right)^{20} = 1 - \left(1 - \exp(-2)\right)^{20}.$$

VII. D
We have:
$$F_X(x) = \int_0^x f_X(t)\,dt = \begin{cases} 0, & \text{if } x \le 0; \\ \int_0^x 2t\,dt = \left[t^2\right]_0^x = x^2, & \text{if } 0 < x < 1; \\ 1, & \text{if } x \ge 1. \end{cases}$$
Let $U = X_{(n)}$; then:
$$f_U(u) = n\,f_X(u)\left(F_X(u)\right)^{n-1} = n\,2u\,u^{2(n-1)},$$
for $u \in [0, 1]$ and zero otherwise. Thus we have:
$$E[U] = \int_0^1 u\,f_U(u)\,du = \int_0^1 u\,n\,2u\,u^{2(n-1)}\,du = 2n\int_0^1 u^{2n}\,du = 2n\left[\frac{u^{2n+1}}{2n+1}\right]_0^1 = \frac{2n}{2n+1}.$$
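Simulating this maximum — an addition, assuming numpy; if $W \sim U(0,1)$ then $X = \sqrt{W}$ has density $2x$ on $(0,1)$, and $n = 4$ is illustrative:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 4
x = np.sqrt(rng.uniform(size=(200_000, n)))   # each column has density 2x on (0,1)

print(x.max(axis=1).mean(), 2*n/(2*n + 1))    # E[X_(n)] = 2n/(2n+1)
```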

VIII. E
We have that $X \sim U(8.5, 10.5)$, so $f_X(x) = 1/2$ if $x \in [8.5, 10.5]$ and zero otherwise, and we have:
$$F_X(x) = \begin{cases} 0, & \text{if } x < 8.5; \\ \frac{x - 8.5}{2}, & \text{if } 8.5 \le x \le 10.5; \\ 1, & \text{if } x > 10.5. \end{cases}$$
Then we have: $\Pr(\text{loser will not break world record}) = \Pr\left(X_{(8)} \ge 9.9\right) = 1 - \Pr\left(X_{(8)} < 9.9\right) = 1 - \left(F_X(9.9)\right)^8 = 1 - 0.7^8.$
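Numerically, $1 - 0.7^8 \approx 0.9424$; a short simulation confirms it — an addition, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(13)
times = rng.uniform(8.5, 10.5, size=(200_000, 8))

print((times.max(axis=1) >= 9.9).mean(), 1 - 0.7**8)   # both close to 0.9424
```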
-End of Week 4 Tutorial Solutions-

© Katja Ignatieva, School of Risk and Actuarial Studies, ASB, UNSW