
Australian School of Business

Probability and Statistics


Solutions Week 5
1. We are given that $X \sim \text{Exp}(1/5000)$. Thus, $E[X] = 5000$ and $Var(X) = (5000)^2$. Let $S = X_1 + \ldots + X_{100}$. Then $E[S] = 100(5000) = 500{,}000$ and $Var(S) = 100(5000)^2$. Thus, using the central limit theorem, we have:

\[
\Pr(S > 100(5050)) = \Pr\left( \frac{S - E[S]}{\sqrt{Var(S)}} > \frac{100(50)}{10(5000)} \right) \approx \Pr(Z > 0.10) = 1 - 0.5398 = 0.4602.
\]
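A quick Monte Carlo sanity check (a sketch added here, not part of the original solution; it uses numpy and assumes the Exp(1/5000) model above):

    import numpy as np

    rng = np.random.default_rng(42)
    n_sims, n = 100_000, 100
    # S = X_1 + ... + X_100 with X_i ~ Exp(rate 1/5000), i.e. mean 5000
    S = rng.exponential(scale=5000, size=(n_sims, n)).sum(axis=1)
    print("simulated Pr(S > 505000):", (S > 505_000).mean())
    print("CLT approximation:       ", 0.4602)

The simulated value agrees with the CLT figure to roughly two decimal places; the small gap is the skewness of the gamma-distributed sum that the normal approximation ignores.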

2. To find an estimator for $\theta$ using the method of moments, set $E[X] = \bar{X}$. We then have:

\[
\bar{X} = E[X] = \int_0^\theta x\, f_X(x)\, dx = \int_0^\theta x\, \frac{2(\theta - x)}{\theta^2}\, dx
= \frac{2}{\theta^2} \int_0^\theta \left( \theta x - x^2 \right) dx
= \frac{2}{\theta^2} \left[ \frac{\theta x^2}{2} - \frac{x^3}{3} \right]_0^\theta
= \frac{2}{\theta^2} \left( \frac{\theta^3}{2} - \frac{\theta^3}{3} \right)
= \frac{\theta}{3}.
\]

Hence, the method of moments estimate is:

\[
\hat{\theta} = 3\bar{X}.
\]
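A short simulation (a sketch, not part of the original; it assumes the triangular density $f_X(x) = 2(\theta - x)/\theta^2$ on $(0, \theta)$ used above, for which $F_X(x) = 1 - (1 - x/\theta)^2$, so inverse-transform sampling applies):

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 5.0
    # F(x) = 1 - (1 - x/theta)^2  =>  X = theta * (1 - sqrt(1 - U))
    u = rng.uniform(size=1_000_000)
    x = theta * (1 - np.sqrt(1 - u))
    print("3 * sample mean:", 3 * x.mean())   # close to theta = 5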

3. To prove $\bar{X}_n \to \mu$ in probability, we show that if we take any $\varepsilon > 0$, we must have:

\[
\Pr\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) \to 0, \quad \text{as } n \to \infty,
\]

or, equivalently:

\[
\lim_{n \to \infty} \Pr\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) = 0.
\]

First, note that we have:

\[
E\left[ \bar{X}_n \right] = \mu \qquad \text{and} \qquad Var\left( \bar{X}_n \right) = \frac{1}{n^2} \sum_{k=1}^{n} \sigma_k^2.
\]

Applying Chebyshev's inequality:

\[
\Pr\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) \le \frac{1}{\varepsilon^2} \cdot \frac{1}{n^2} \sum_{k=1}^{n} \sigma_k^2.
\]

And take the limits on both sides:

\[
\lim_{n \to \infty} \Pr\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) \le \underbrace{\lim_{n \to \infty} \frac{1}{\varepsilon^2} \cdot \frac{1}{n^2} \sum_{k=1}^{n} \sigma_k^2}_{=0} = 0.
\]


Thus, the result follows.
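A numerical illustration of this convergence (a sketch, not in the original; it assumes i.i.d. Exp(1) observations, so $\mu = 1$ and $\sigma_k^2 = 1$):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, eps, n_paths = 1.0, 0.1, 10_000
    for n in (10, 100, 1_000):
        xbar = rng.exponential(scale=mu, size=(n_paths, n)).mean(axis=1)
        print("n =", n, " Pr(|Xbar - mu| > 0.1) ~", (np.abs(xbar - mu) > eps).mean())

The printed frequencies shrink toward zero as n grows, exactly as Chebyshev's bound dictates.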


4. Let $L$ be the location after one hour (or 60 minutes). Therefore:

\[
L = X_1 + \ldots + X_{60},
\]

where

\[
X_k = \begin{cases} +50 \text{ cm}, & \text{w.p. } 1/2, \\ -50 \text{ cm}, & \text{w.p. } 1/2, \end{cases}
\]

so that $E[X_k] = 0$ and $Var(X_k) = 2500$. Therefore,

\[
E[L] = 0 \qquad \text{and} \qquad Var(L) = 60(2500) = 150{,}000.
\]

Thus, using the central limit theorem, we have:

\[
\Pr(L \le x) = \Pr\left( \frac{L - E[L]}{\sqrt{Var(L)}} \le \frac{x}{\sqrt{150{,}000}} \right) \approx \Pr\left( Z \le \frac{x}{100\sqrt{15}} \right).
\]

In other words,

\[
L \sim N(0,\ 150{,}000)
\]

approximately. The mean of a normal distribution is also its mode; therefore, the walker's most likely position after one hour is 0, the point where he started.
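A direct simulation of the walk (a sketch added here, not part of the original solution) confirms both moments and the mode:

    import numpy as np

    rng = np.random.default_rng(2)
    steps = rng.choice([-50, 50], size=(100_000, 60))   # 60 one-minute steps of +/-50 cm
    L = steps.sum(axis=1)
    vals, counts = np.unique(L, return_counts=True)
    print("sample mean:", L.mean(), "  sample variance:", L.var())   # ~0 and ~150000
    print("most frequent position:", vals[np.argmax(counts)])        # 0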
5. Consider $N$ independent random variables, each having a binomial distribution with parameters $n = 3$ and $\theta$, so that

\[
\Pr(X_i = k) = \binom{3}{k} \theta^k (1-\theta)^{3-k}, \quad \text{for } i = 1, 2, \ldots, N \text{ and } k = 0, 1, 2, 3.
\]

Assume that of these $N$ random variables, $n_0$ take the value 0, $n_1$ take the value 1, $n_2$ take the value 2, and $n_3$ take the value 3, with $N = n_0 + n_1 + n_2 + n_3$.
(a) The likelihood function is given by:

\[
L(\theta; x) = \prod_{i=1}^{N} f_X(x_i)
= \left[ \binom{3}{0} (1-\theta)^3 \right]^{n_0}
\left[ \binom{3}{1} \theta (1-\theta)^2 \right]^{n_1}
\left[ \binom{3}{2} \theta^2 (1-\theta) \right]^{n_2}
\left[ \binom{3}{3} \theta^3 \right]^{n_3}.
\]

The log-likelihood function is given by:

\[
\ell(\theta; x) = \log(L(\theta; x)) = \sum_{i=1}^{N} \log(f_X(x_i))
\]
\[
= n_0 \left( \log \binom{3}{0} + 3 \log(1-\theta) \right)
+ n_1 \left( \log \binom{3}{1} + \log(\theta) + 2 \log(1-\theta) \right)
+ n_2 \left( \log \binom{3}{2} + 2\log(\theta) + \log(1-\theta) \right)
+ n_3 \left( \log \binom{3}{3} + 3\log(\theta) \right),
\]

* using $\log(a \cdot b) = \log(a) + \log(b)$ and $\log(a^c \cdot b) = c \log(a) + \log(b)$.

Then, take the FOC of $\ell(\theta; x)$:

\[
\frac{\partial \ell(\theta; x)}{\partial \theta}
= -\frac{3n_0}{1-\theta} + \frac{n_1}{\theta} - \frac{2n_1}{1-\theta} + \frac{2n_2}{\theta} - \frac{n_2}{1-\theta} + \frac{3n_3}{\theta}
= \frac{n_1 + 2n_2 + 3n_3}{\theta} - \frac{3n_0 + 2n_1 + n_2}{1-\theta}.
\]

Equating this to zero we obtain:

\[
\frac{n_1 + 2n_2 + 3n_3}{\theta} - \frac{3n_0 + 2n_1 + n_2}{1-\theta} = 0,
\]

or, equivalently:

\[
(n_1 + 2n_2 + 3n_3)(1-\theta) = (3n_0 + 2n_1 + n_2)\,\theta.
\]

Thus we have the maximum likelihood estimator for $\theta$:

\[
\hat{\theta} = \frac{n_1 + 2n_2 + 3n_3}{(n_1 + 2n_2 + 3n_3) + (3n_0 + 2n_1 + n_2)}
= \frac{n_1 + 2n_2 + 3n_3}{3n_0 + 3n_1 + 3n_2 + 3n_3}
= \frac{n_1 + 2n_2 + 3n_3}{3N},
\]

* using: $\frac{a}{1-a} = \frac{b}{c} \iff \frac{1}{1/a - 1} = \frac{b}{c} \iff \frac{1}{a} - 1 = \frac{c}{b} \iff \frac{1}{a} = \frac{c+b}{b} \iff a = \frac{b}{b+c}$.


(b) We have:

\[
N = 20, \quad n_0 = 11, \quad n_1 = 7, \quad n_2 = 2, \quad n_3 = 0.
\]

Thus the ML estimate for $\theta$ is given by:

\[
\hat{\theta} = \frac{n_1 + 2n_2 + 3n_3}{3N} = \frac{7 + 4 + 0}{60} = \frac{11}{60} = 0.1833.
\]

Thus, the probability of winning any single bet is given by 0.1833.
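As a cross-check (a sketch, not part of the original), maximizing the log-likelihood numerically over a grid reproduces the closed-form estimate $11/60$:

    import numpy as np
    from scipy.stats import binom

    counts = {0: 11, 1: 7, 2: 2, 3: 0}            # n_k observations of each value k, N = 20
    thetas = np.linspace(0.001, 0.999, 9_999)
    loglik = sum(n_k * binom.logpmf(k, 3, thetas) for k, n_k in counts.items())
    print("grid-search MLE:", thetas[np.argmax(loglik)])   # ~0.1833
    print("closed form:    ", 11 / 60)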


6. (a) The likelihood function is given by:

\[
L(\theta; y, A) = \prod_{i=1}^{n} f_Y(y_i) = \prod_{i=1}^{n} \frac{\theta A^\theta}{y_i^{\theta+1}}
= \frac{\theta^n A^{n\theta}}{\prod_{i=1}^{n} y_i^{\theta+1}}
= \frac{\theta^n A^{n\theta}}{\left( \prod_{i=1}^{n} y_i \right)^{\theta+1}}
= \frac{\theta^n A^{n\theta}}{G^{n(\theta+1)}},
\]

where $G = \left( \prod_{i=1}^{n} y_i \right)^{1/n}$ denotes the geometric mean of the sample.

(b) In the lecture we have seen that:

\[
\pi(\theta | y; A) = f_{\Theta|Y}(\theta | y; A)
= \frac{f_{Y|\Theta}(y | \theta; A)\, \pi(\theta)}{\int f_{Y|\Theta}(y | \theta; A)\, \pi(\theta)\, d\theta}
= \frac{f_{Y|\Theta}(y | \theta; A)\, \pi(\theta)}{f_Y(y; A)}
\propto f_{Y|\Theta}(y | \theta; A)\, \pi(\theta),
\]

* using Bayes' formula: $\Pr(A_i | B) = \frac{\Pr(B|A_i) \Pr(A_i)}{\sum_j \Pr(B|A_j)\Pr(A_j)}$, where the sets $A_i$, $i = 1, \ldots, n$, form a complete partition of the sample space;
** using the law of total probability: $\Pr(A) = \sum_i \Pr(A|B_i)\Pr(B_i)$ if the $B_i$, $i = 1, \ldots, n$, form a complete partition of the sample space;
*** using that $f_Y(y; A)$ is, given the data, a known constant.


(c) We have that the posterior density is given by:

\[
\pi(\theta | y; A) = f_{\Theta|Y}(\theta | y; A) \propto f_{Y|\Theta}(y|\theta; A)\, \pi(\theta)
= \pi(\theta) \prod_{i=1}^{n} f_Y(y_i; A)
= L(\theta; y, A)\, \pi(\theta)
= \frac{1}{\theta}\, L(\theta; y, A)
\]
\[
= \frac{\theta^{n-1} A^{n\theta}}{G^{n(\theta+1)}}
= \theta^{n-1} \left( \frac{A}{G} \right)^{n\theta} \left( \frac{1}{G} \right)^{n}
\propto \theta^{n-1} \left( \frac{A}{G} \right)^{n\theta}
= \theta^{n-1} \exp\left( \log\left( \left( \frac{G}{A} \right)^{-n\theta} \right) \right)
= \theta^{n-1} \exp\left( -n\theta \log\left( \frac{G}{A} \right) \right)
= \theta^{n-1} \exp(-n a \theta),
\]

where $a = \log(G/A)$,

* using independence between all $f_{Y|\Theta}(y_i | \theta; A)$ and $f_{Y|\Theta}(y_j | \theta; A)$ for $i \ne j$;
** using that $1/G^n$ is, given the data, a known constant.
(d) We have that $\pi(\theta|y; A) \propto \theta^{n-1} \exp(-na\theta)$ or, equivalently, there exists some constant $c$ for which $\pi(\theta|y; A) = c\, \theta^{n-1} \exp(-na\theta)$. We need to determine the constant $c$. We know that $\int \pi(\theta|y; A)\, d\theta = 1$, because otherwise it is not a posterior density.

Given this observation, we compare $c\, \theta^{n-1} \exp(-na\theta)$ with the p.d.f. of $X \sim \text{Gamma}(\alpha_x, \beta_x)$, which is given by:

\[
f_X(x) = \frac{\beta_x^{\alpha_x}}{\Gamma(\alpha_x)}\, x^{\alpha_x - 1} e^{-\beta_x x}.
\]

Now, substitute $x = \theta$, $\alpha_x = n$, $\beta_x = an$, and $c = \frac{\beta_x^{\alpha_x}}{\Gamma(\alpha_x)} = \frac{(an)^n}{\Gamma(n)}$. Then we have the density of a Gamma(n, an) distribution. Hence, the posterior density is given by:

\[
\pi(\theta|y; A) = \frac{(an)^n}{\Gamma(n)}\, \theta^{n-1} e^{-an\theta}, \quad \text{for } 0 < \theta < \infty,
\]

and zero otherwise.


(e) The Bayesian estimator of $\theta$ is the expected value of the posterior. The posterior has a Gamma(n, an) distribution, and for $Z \sim \text{Gamma}(n, an)$ we have $E[Z] = \frac{n}{na}$. Thus:

\[
\hat{\theta}_B = E[\theta | y; A] = \frac{n}{na} = \frac{1}{a}.
\]

Thus the Bayesian estimator of $\theta$ is $1/a$, with $a = \log(G/A)$.
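A numerical check of (c)-(e) (a sketch, not in the original; it assumes the Pareto-type density $f_Y(y) = \theta A^\theta / y^{\theta+1}$ for $y > A$ and the prior $\pi(\theta) \propto 1/\theta$ used above):

    import numpy as np

    rng = np.random.default_rng(3)
    theta_true, A, n = 2.5, 1.0, 50
    y = A * (1 - rng.uniform(size=n)) ** (-1 / theta_true)   # inverse-CDF draws from f_Y
    a = np.log(y).mean() - np.log(A)                          # a = log(G/A)
    theta = np.linspace(1e-6, 12, 400_001)                    # grid over the parameter
    w = theta ** (n - 1) * np.exp(-n * a * theta)             # unnormalized posterior
    print("posterior mean on the grid:", (theta * w).sum() / w.sum())
    print("closed form 1/a:           ", 1 / a)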

7. We use moment generating functions to show that:

(a) The binomial tends to the Poisson. Let $X \sim \text{Binomial}(n, p)$. Its m.g.f. is therefore:

\[
M_X(t) = \left( 1 - p + p e^t \right)^n;
\]

let $np = \lambda$ so that $p = \lambda/n$:

\[
M_X(t) = \left( 1 - \frac{\lambda}{n} + \frac{\lambda}{n} e^t \right)^n = \left( 1 + \frac{\lambda\left( e^t - 1 \right)}{n} \right)^n,
\]

and by taking the limit on both sides, we have:

\[
\lim_{n \to \infty} M_X(t) = \lim_{n \to \infty} \left( 1 + \frac{\lambda\left( e^t - 1 \right)}{n} \right)^n = \exp\left( \lambda\left( e^t - 1 \right) \right),
\]

which is the moment generating function of a Poisson with mean $\lambda$.
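The convergence is easy to see numerically (a sketch, not part of the original solution; it compares the two probability mass functions directly rather than the m.g.f.s):

    import numpy as np
    from scipy.stats import binom, poisson

    lam = 3.0
    for n in (10, 100, 10_000):
        p = lam / n
        max_diff = max(abs(binom.pmf(k, n, p) - poisson.pmf(k, lam)) for k in range(20))
        print("n =", n, " max |binomial pmf - Poisson pmf| =", round(max_diff, 6))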

(b) The gamma, properly standardized, tends to the normal. Let $X \sim \text{Gamma}(\alpha, \beta)$ so that its density is of the form:

\[
f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x}, \quad \text{for } x \ge 0,
\]

and zero otherwise, and its m.g.f. is:

\[
M_X(t) = \left( \frac{\beta}{\beta - t} \right)^\alpha.
\]

Its mean and variance are, respectively, $\alpha/\beta$ and $\alpha/\beta^2$. These results were derived in the week 2 lecture. Consider the standardized gamma random variable:

\[
Y = \frac{X - E[X]}{\sqrt{Var(X)}} = \frac{X - \alpha/\beta}{\sqrt{\alpha}/\beta} = \frac{\beta X}{\sqrt{\alpha}} - \sqrt{\alpha}.
\]

Its moment generating function is:

\[
M_Y(t) = E\left[ e^{tY} \right] = e^{-\sqrt{\alpha}\, t}\, M_X\!\left( \frac{\beta t}{\sqrt{\alpha}} \right)
= e^{-\sqrt{\alpha}\, t} \left( 1 - \frac{t}{\sqrt{\alpha}} \right)^{-\alpha}
= e^{-\sqrt{\alpha}\, t}\, e^{-\alpha \log\left( 1 - t/\sqrt{\alpha} \right)}
\]
\[
= \exp\left( -\sqrt{\alpha}\, t + \alpha\left( \frac{t}{\sqrt{\alpha}} + \frac{1}{2} \left( \frac{t}{\sqrt{\alpha}} \right)^2 + R \right) \right)
= \exp\left( \frac{1}{2} t^2 + \alpha R \right),
\]

where $R$ is the Taylor series remainder term; $\alpha R$ involves powers of $1/\sqrt{\alpha}$. Thus, in the limit, $M_Y(t) \to \exp\left( \frac{1}{2} t^2 \right)$ as $\alpha \to \infty$, which is the m.g.f. of a standard normal random variable.
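Evaluating $M_Y(t)$ for growing $\alpha$ (a sketch, not part of the original solution) makes the limit visible:

    import numpy as np

    t = 0.5
    for alpha in (10, 100, 10_000):
        m_y = np.exp(-np.sqrt(alpha) * t) * (1 - t / np.sqrt(alpha)) ** (-alpha)
        print("alpha =", alpha, "  M_Y(0.5) =", round(float(m_y), 6))
    print("normal limit exp(t^2/2) =", round(float(np.exp(t ** 2 / 2)), 6))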

8. If the law of large numbers were to hold here, the sample mean $\bar{X}$ would approach the mean of $X$, which does not exist in this case. At first glance this might therefore seem to be a violation. In fact, it is not: the assumption of a finite mean does not hold for the Cauchy distribution, so the law of large numbers simply does not apply.
9. We are given $n$ realizations $x_i$, where $i = 1, 2, \ldots, n$. We know that $x_i | p \sim \text{Ber}(p)$ and $p \sim U(0, 1)$. We are asked to find the Bayesian estimators for $p$ and $p(1-p)$. Since the $n$ random variables are conditionally independent given $p$:

\[
f(x_1, x_2, \ldots, x_n | p) = \prod_{i=1}^{n} f(x_i | p) = p^{\sum_{i=1}^n x_i} (1-p)^{n - \sum_{i=1}^n x_i}.
\]

Since the prior density of $p$ is constant (equal to one) on $(0, 1)$, the joint density of the $x_i$ and $p$ is:

\[
f(x_1, x_2, \ldots, x_n, p) = p^{\sum_{i=1}^n x_i} (1-p)^{n - \sum_{i=1}^n x_i}.
\]

Then we can compute the marginal joint density of the $x_i$, $i = 1, 2, \ldots, n$:

\[
f(x_1, x_2, \ldots, x_n) = \int_0^1 p^{\sum_{i=1}^n x_i} (1-p)^{n - \sum_{i=1}^n x_i}\, dp
= \frac{\Gamma\left( \sum_{i=1}^n x_i + 1 \right) \Gamma\left( n - \sum_{i=1}^n x_i + 1 \right)}{\Gamma(n + 2)}.
\]
(a) Method 1: Hence we can obtain the posterior density:

\[
f(p | x_1, x_2, \ldots, x_n) = \frac{f(x_1, x_2, \ldots, x_n, p)}{f(x_1, x_2, \ldots, x_n)}
= \frac{\Gamma(n+2)}{\Gamma\left( \sum_{i=1}^n x_i + 1 \right) \Gamma\left( n + 1 - \sum_{i=1}^n x_i \right)}\, p^{\sum_{i=1}^n x_i} (1-p)^{n - \sum_{i=1}^n x_i},
\]

which is the probability density function of a $\text{Beta}\left( \sum_{i=1}^n x_i + 1,\; n + 1 - \sum_{i=1}^n x_i \right)$ distribution.

Method 2: Observe that the posterior $f(p | x_1, x_2, \ldots, x_n)$ is proportional to the p.d.f. of a Beta distribution and use this to identify its distribution: $f(p | x_1, \ldots, x_n) \propto f_Y(p)$, where $Y \sim \text{Beta}\left( \sum_{i=1}^n x_i + 1,\; n + 1 - \sum_{i=1}^n x_i \right)$.

The Bayesian estimator for $p$ will thus be:

\[
\hat{p}^B = E[p | X] = \frac{\sum_{i=1}^n x_i + 1}{n + 2}.
\]

(See Formulae and Tables, page 13.)
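A quick conjugacy check (a sketch, not in the original; the parameter values are illustrative):

    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(4)
    p_true, n = 0.3, 100
    x = rng.binomial(1, p_true, size=n)
    s = x.sum()
    posterior = beta(s + 1, n + 1 - s)           # Beta(sum + 1, n + 1 - sum)
    print("closed form (s+1)/(n+2):", (s + 1) / (n + 2))
    print("scipy posterior mean:   ", posterior.mean())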

(b) Now we wish to find a Bayesian estimator for $p(1-p)$. Using a similar idea:

\[
\widehat{p(1-p)}^B = E[p(1-p) | X] = \int_0^1 p(1-p)\, f(p | x_1, x_2, \ldots, x_n)\, dp
\]
\[
= \frac{\Gamma(n+2)}{\Gamma\left( \sum_{i=1}^n x_i + 1 \right) \Gamma\left( n + 1 - \sum_{i=1}^n x_i \right)} \int_0^1 p^{1 + \sum_{i=1}^n x_i} (1-p)^{n + 1 - \sum_{i=1}^n x_i}\, dp
\]
\[
= \frac{\Gamma(n+2)}{\Gamma\left( \sum_{i=1}^n x_i + 1 \right) \Gamma\left( n + 1 - \sum_{i=1}^n x_i \right)} \cdot \frac{\Gamma\left( \sum_{i=1}^n x_i + 2 \right) \Gamma\left( n - \sum_{i=1}^n x_i + 2 \right)}{\Gamma(n + 4)}
\]
\[
= \frac{\left( \sum_{i=1}^n x_i + 1 \right) \left( n + 1 - \sum_{i=1}^n x_i \right)}{(n+3)(n+2)},
\]

* using the Beta function: $B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} = \int_0^1 x^{\alpha-1} (1-x)^{\beta-1}\, dx$, where $\alpha = \sum_{i=1}^n x_i + 2$, $\beta = n - \sum_{i=1}^n x_i + 2$, $\alpha + \beta = n + 4$;
** using the Gamma function recursion: $\Gamma(\alpha) = (\alpha - 1)\,\Gamma(\alpha - 1)$.

Alternatively, using the first two moments of the Beta distribution (see Formulae and Tables, page 13) we have:

\[
\widehat{p(1-p)}^B = E[p(1-p) | X] = E[p | X] - E\left[ p^2 | X \right]
= \frac{a}{a+b} - \frac{(a+1)\,a}{(a+b+1)(a+b)}
= \frac{\left( \sum_{i=1}^n x_i + 1 \right)\left( n + 1 - \sum_{i=1}^n x_i \right)}{(n+3)(n+2)},
\]

* where $a = \sum_{i=1}^n x_i + 1$ and $b = n + 1 - \sum_{i=1}^n x_i$, so that $a + b = n + 2$.

(c) We are interested in the Bayesian estimator of $p(1-p)$, since $np(1-p)$ is the variance of the binomial distribution (with $n$ a known constant), and we can use this for the normal approximation.
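The closed form in (b) can be verified through the Beta moments (a sketch, not part of the original; the values of n and s are illustrative):

    from scipy.stats import beta

    n, s = 20, 6                                 # n observations with s = sum(x_i) ones
    post = beta(s + 1, n + 1 - s)                # posterior from part (a)
    moment = post.mean() - (post.var() + post.mean() ** 2)   # E[p] - E[p^2]
    closed = (s + 1) * (n + 1 - s) / ((n + 3) * (n + 2))
    print("E[p(1-p)|X] via moments:", moment)
    print("closed form:            ", closed)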
10. The common distribution function is given by:

\[
F_X(x) = \int_1^x \theta u^{-(\theta+1)}\, du = \left[ -u^{-\theta} \right]_1^x = 1 - x^{-\theta}, \quad \text{if } x > 1,
\]

and zero otherwise. The distribution function of $Y_n = X_{(n)} / n^{1/\theta}$ will be:

\[
F_{Y_n}(x) = \Pr(Y_n \le x) = \Pr\left( \frac{X_{(n)}}{n^{1/\theta}} \le x \right)
= \Pr\left( X_{(n)} \le n^{1/\theta} x \right)
= \left( 1 - \left( n^{1/\theta} x \right)^{-\theta} \right)^n
= \left( 1 - \frac{x^{-\theta}}{n} \right)^n,
\]

if $n^{1/\theta} x > 1$ and zero otherwise, * using $\Pr(X_{(n)} \le t) = \left( F_X(t) \right)^n$ for the maximum of $n$ i.i.d. random variables. Notice that whereas the support of $X$ requires $x > 1$, the transformation $Y_n = X_{(n)}/n^{1/\theta}$ rescales it; in particular, when $\theta$ is close to zero, $n^{1/\theta}$ is large! Taking the limit as $n \to \infty$, we have:

\[
\lim_{n \to \infty} F_{Y_n}(x) = \lim_{n \to \infty} \left( 1 - \frac{x^{-\theta}}{n} \right)^n = \exp\left( -x^{-\theta} \right), \quad x > 0.
\]

Thus, the limit exists and therefore $Y_n$ converges in distribution. The limiting distribution is:

\[
F_Y(y) = \exp\left( -y^{-\theta} \right), \quad \text{for } y > 0,
\]

and zero otherwise; the corresponding density is:

\[
f_Y(y) = \frac{\partial F_Y(y)}{\partial y} = \theta\, y^{-(\theta+1)} \exp\left( -y^{-\theta} \right), \quad \text{if } y > 0,
\]

and zero otherwise. One can prove that this is a legitimate density: $f_Y(y) \ge 0$ for all $y$, because for $\theta > 0$ both $\theta y^{-(\theta+1)} \ge 0$ and $\exp\left( -y^{-\theta} \right) \ge 0$, and $F_Y(\infty) = \int f_Y(y)\, dy = \exp(0) = 1$.
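The convergence to this Fréchet-type limit can be checked by simulation (a sketch, not part of the original solution):

    import numpy as np

    rng = np.random.default_rng(5)
    theta, n, n_sims = 2.0, 1_000, 10_000
    u = rng.uniform(size=(n_sims, n))
    x = (1 - u) ** (-1 / theta)              # F(x) = 1 - x^(-theta), x > 1
    y = x.max(axis=1) / n ** (1 / theta)     # Y_n = X_(n) / n^(1/theta)
    for q in (0.5, 1.0, 2.0):
        print("x =", q, " empirical:", round(float((y <= q).mean()), 4),
              " limit:", round(float(np.exp(-q ** -theta)), 4))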

11. The mean and the variance of $S$ are, respectively:

\[
E[S] = \frac{40}{3} \qquad \text{and} \qquad Var(S) = \frac{10}{9}.
\]

Thus, using the central limit theorem, we have:

\[
\Pr(S \le 10) = \Pr\left( \frac{S - E[S]}{\sqrt{Var(S)}} \le \frac{10 - 40/3}{\sqrt{10/9}} \right)
\approx \Pr\left( Z \le -\sqrt{10} \right) = \Pr(Z \le -3.16) = 0.0008.
\]
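A one-line check of the normal tail (a sketch, not in the original):

    from math import sqrt
    from scipy.stats import norm

    z = (10 - 40 / 3) / sqrt(10 / 9)
    print("z =", z)                      # -sqrt(10) ~ -3.162
    print("Pr(Z <= z) =", norm.cdf(z))   # ~0.00079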
12. Note that $X$ can be interpreted as a geometric random variable, where $k$ is the total number of trials. Here $E[X] = \frac{1}{p}$.
(a) The method of moments estimator is obtained from $\bar{X} = E[X] = \frac{1}{p}$, which gives:

\[
\tilde{p} = \frac{1}{\bar{X}} = \frac{n}{\sum_{i=1}^{n} X_i}.
\]

(b) The likelihood function is:

\[
L(p; x) = \prod_{i=1}^{n} f_X(x_i) = \prod_{i=1}^{n} p\,(1-p)^{x_i - 1} = p^n (1-p)^{\sum_{i=1}^n x_i - n}.
\]

The log-likelihood function is:

\[
\ell(p; x) = \log(L(p; x)) = \sum_{i=1}^{n} \log(f_X(x_i)) = n \log(p) + \left( \sum_{i=1}^{n} x_i - n \right) \log(1-p).
\]

Take the FOC of $\ell(p; x)$ w.r.t. $p$ and equate it to zero:

\[
\frac{\partial \ell(p; x)}{\partial p} = \frac{n}{p} - \frac{\sum_{i=1}^n x_i - n}{1-p} = 0.
\]

Then we obtain the maximum likelihood estimator for $p$:

\[
\hat{p} = \frac{n}{\left( \sum_{i=1}^n X_i - n \right) + n} = \frac{n}{\sum_{i=1}^n X_i},
\]

* using: $\frac{a}{1-a} = \frac{b}{c} \iff \frac{1}{1/a - 1} = \frac{b}{c} \iff \frac{1}{a} - 1 = \frac{c}{b} \iff \frac{1}{a} = \frac{c+b}{b} \iff a = \frac{b}{b+c}$.
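Note that here the MLE coincides with the method of moments estimator. A simulation check (a sketch, not part of the original; numpy's geometric generator counts the total number of trials, matching the convention above):

    import numpy as np

    rng = np.random.default_rng(6)
    p_true, n = 0.25, 100_000
    x = rng.geometric(p_true, size=n)    # support {1, 2, ...}
    print("MLE n / sum(x):", n / x.sum())
    print("true p:        ", p_true)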


13. For the Pareto distribution with parameters $x_0$ and $\theta$ we have the following p.d.f.:

\[
f(x) = \theta\, x_0^\theta\, x^{-\theta - 1}, \quad x \ge x_0,\ \theta > 1,
\]

and zero otherwise. The expected value of the random variable $X$ is then given by:

\[
E[X] = \int x\, f_X(x)\, dx = \int_{x_0}^{\infty} x\, \theta\, x_0^\theta\, x^{-\theta-1}\, dx
= \theta\, x_0^\theta \int_{x_0}^{\infty} x^{-\theta}\, dx
= \theta\, x_0^\theta \left[ \frac{x^{-\theta+1}}{-\theta + 1} \right]_{x_0}^{\infty}
= \frac{\theta}{\theta - 1}\, x_0.
\]

(a) Given $x_0$, we have $E[X] = \frac{\theta}{\theta - 1} x_0$; setting $\bar{X} = E[X]$ gives:

\[
\bar{X} = \frac{\theta}{\theta - 1}\, x_0
\iff \bar{X}(\theta - 1) = \theta\, x_0
\iff \theta \left( \bar{X} - x_0 \right) = \bar{X}
\iff \theta = \frac{\bar{X}}{\bar{X} - x_0}.
\]

Thus the method of moments estimator of $\theta$ is $\hat{\theta} = \dfrac{\bar{X}}{\bar{X} - x_0}$.

(b) The likelihood function is given by:

\[
L(\theta; x) = \prod_{i=1}^{n} f_X(x_i) = \prod_{i=1}^{n} \theta\, x_0^\theta\, x_i^{-\theta-1} = \theta^n\, x_0^{n\theta} \prod_{i=1}^{n} x_i^{-\theta-1}.
\]

The log-likelihood function is given by:

\[
\ell(\theta; x) = \log(L(\theta; x)) = \sum_{i=1}^{n} \log(f_X(x_i)) = n \log(\theta) + n\theta \log(x_0) - (\theta + 1) \sum_{i=1}^{n} \log(x_i).
\]

Take the FOC of $\ell(\theta; x)$ and equate it to zero:

\[
\frac{\partial \ell(\theta; x)}{\partial \theta} = \frac{n}{\theta} + n \log(x_0) - \sum_{i=1}^{n} \log(x_i) = 0
\iff \frac{n}{\theta} = -n \log(x_0) + \sum_{i=1}^{n} \log(x_i)
\iff \hat{\theta} = \frac{n}{-n\log(x_0) + \sum_{i=1}^{n} \log(x_i)}.
\]

Thus, the maximum likelihood estimator for $\theta$ is given by $\hat{\theta} = \dfrac{n}{\sum_{i=1}^{n} \log(x_i) - n \log(x_0)}$.
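Both estimators are easy to validate by simulation (a sketch, not in the original; inverse-transform sampling uses $F(x) = 1 - (x_0/x)^\theta$):

    import numpy as np

    rng = np.random.default_rng(7)
    theta_true, x0, n = 3.0, 2.0, 100_000
    x = x0 * (1 - rng.uniform(size=n)) ** (-1 / theta_true)   # inverse-CDF Pareto draws
    xbar = x.mean()
    print("method of moments: ", xbar / (xbar - x0))
    print("maximum likelihood:", n / (np.log(x).sum() - n * np.log(x0)))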

14. The p.d.f. of a chi-squared distribution with one degree of freedom is:

\[
f_Y(y) = \frac{\exp(-y/2)}{\sqrt{2\pi y}}, \quad \text{if } y > 0,
\]

and zero otherwise. We need to prove that the moment generating function of $Y$ is given by $M_Y(t) = (1 - 2t)^{-1/2}$.

Using the transformation $x = \sqrt{2y(1/2 - t)}$ (valid for $t < 1/2$), so that $y = \frac{x^2}{2(1/2 - t)}$ and $dy = \frac{x}{1/2 - t}\, dx$, we have:

\[
M_Y(t) = \int_0^\infty e^{ty} f_Y(y)\, dy = \int_0^\infty \exp(ty)\, \frac{\exp(-y/2)}{\sqrt{2\pi y}}\, dy
= \int_0^\infty \frac{\exp\left( -y\left( 1/2 - t \right) \right)}{\sqrt{2\pi y}}\, dy
\]
\[
= \frac{2}{\sqrt{2(1/2 - t)}} \int_0^\infty \frac{\exp\left( -x^2/2 \right)}{\sqrt{2\pi}}\, dx
= \frac{2}{\sqrt{2(1/2 - t)}} \cdot \frac{1}{2}
= \left( 2\left( 1/2 - t \right) \right)^{-1/2} = (1 - 2t)^{-1/2},
\]

* using that $\int_0^\infty \frac{\exp(-x^2/2)}{\sqrt{2\pi}}\, dx$ is the integral of the p.d.f. of a standard normal random variable over the positive values of $x$; by the symmetry of the standard normal distribution about 0, this integral equals $1/2$.
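Numerical quadrature confirms the closed form (a sketch, not part of the original solution; the integral is split at 1 so the integrable singularity at 0 sits on a finite interval):

    import numpy as np
    from scipy.integrate import quad

    t = 0.2
    integrand = lambda y: np.exp(t * y) * np.exp(-y / 2) / np.sqrt(2 * np.pi * y)
    numeric = quad(integrand, 0, 1)[0] + quad(integrand, 1, np.inf)[0]
    print("numeric M_Y(0.2):", numeric)
    print("(1 - 2t)^(-1/2): ", (1 - 2 * t) ** -0.5)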
15. We need to prove that:

\[
t_{n-1} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty.
\]

This means that a $t$-distribution converges in distribution to a standard normal distribution as $n \to \infty$. Here we cannot use the moment generating function, because it is not defined for a Student-$t$ distribution. Note that the definition of convergence in distribution is: $X_n$ converges in distribution to the random variable $X$ as $n \to \infty$ if and only if, for every $x$:

\[
F_{X_n}(x) \to F_X(x) \quad \text{as } n \to \infty.
\]

This suggests using the cumulative distribution functions of the Student-$t$ distribution and the standard normal distribution to prove the convergence. However, these do not have closed form expressions. Therefore, we will prove that the probability density function of a Student-$t$ distribution converges to the standard normal one as $n \to \infty$. When the probability density function converges, the cumulative distribution function must also converge.

We have:

\[
\lim_{n \to \infty} f_{t|n}(x) = \lim_{n \to \infty} \frac{\Gamma\left( \frac{n+1}{2} \right)}{\sqrt{n\pi}\, \Gamma\left( \frac{n}{2} \right)} \left( 1 + \frac{x^2}{n} \right)^{-(n+1)/2}
= \lim_{n \to \infty} \frac{1}{\sqrt{2\pi}}\, \frac{\Gamma\left( \frac{n}{2} + \frac{1}{2} \right)}{\Gamma\left( \frac{n}{2} \right) \sqrt{n/2}} \left( 1 + \frac{x^2/2}{n/2} \right)^{-n/2 - 1/2}
\]
\[
= \frac{1}{\sqrt{2\pi}} \lim_{n \to \infty} \frac{1}{\left( 1 + \frac{x^2/2}{n/2} \right)^{n/2}} \cdot \frac{1}{\sqrt{1 + \frac{x^2/2}{n/2}}}
= \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2},
\]

which is the probability density function of a standard normal random variable,

* using $\lim_{n \to \infty} \frac{\Gamma\left( \frac{n}{2} + \frac{1}{2} \right)}{\Gamma\left( \frac{n}{2} \right) \sqrt{n/2}} = 1$;
** using $e^a = \lim_{n \to \infty} \left( 1 + \frac{a}{n} \right)^n$, here with $a = x^2/2$ and $n/2$ in place of $n$;
*** using $\lim_{n \to \infty} \sqrt{1 + \frac{x^2/2}{n/2}} = 1$.
16. Let $U \sim \chi^2_{n_1}$ and $V \sim \chi^2_{n_2}$ be independent.

i) Define the transformations:

\[
F = \frac{U/n_1}{V/n_2}, \qquad G = V.
\]

ii) Determine the inverse of the transformations:

\[
V = G, \qquad U = n_1 F V / n_2 = n_1 F G / n_2.
\]

iii) Calculate the absolute value of the Jacobian:

\[
|J| = \left| \det \begin{pmatrix} \partial u / \partial f & \partial u / \partial g \\ \partial v / \partial f & \partial v / \partial g \end{pmatrix} \right|
= \left| \det \begin{pmatrix} g\, n_1/n_2 & f\, n_1/n_2 \\ 0 & 1 \end{pmatrix} \right|
= g\, \frac{n_1}{n_2}.
\]

iv) Determine the joint probability density function of $F$ and $G$:

\[
f_{F,G}(f, g) = |J|\, f_{U,V}(u, v) = |J|\, f_U(u)\, f_V(v)
= \frac{n_1 g}{n_2} \cdot \frac{u^{(n_1-2)/2} \exp(-u/2)}{2^{n_1/2}\, \Gamma(n_1/2)} \cdot \frac{v^{(n_2-2)/2} \exp(-v/2)}{2^{n_2/2}\, \Gamma(n_2/2)}
\]
\[
= \frac{n_1 g}{n_2} \cdot \frac{(f n_1 g / n_2)^{(n_1-2)/2}}{2^{n_1/2}\, \Gamma(n_1/2)} \exp\left( -\frac{f n_1 g}{2 n_2} \right) \cdot \frac{g^{(n_2-2)/2}}{2^{n_2/2}\, \Gamma(n_2/2)} \exp(-g/2)
\]
\[
= \frac{n_1 (f n_1)^{(n_1-2)/2}}{n_2^{n_1/2}} \cdot \frac{g^{(n_1+n_2-2)/2}}{2^{n_1/2}\, \Gamma(n_1/2)\, 2^{n_2/2}\, \Gamma(n_2/2)} \exp\left( -g \left( \frac{1}{2} + \frac{f n_1}{2 n_2} \right) \right),
\]

* using independence between $U$ and $V$; ** using the inverse transformation determined in step ii); *** using $\exp(-ga)\exp(-gb) = \exp(-g(a+b))$ and $a^b\, a^c = a^{b+c}$.

v) Calculate the marginal distribution of $F$ by integrating over the other variable:

\[
f_F(f) = \int_0^\infty f_{F,G}(f, g)\, dg
= \frac{n_1 (f n_1)^{(n_1-2)/2}}{n_2^{n_1/2}\, 2^{(n_1+n_2)/2}\, \Gamma(n_1/2)\, \Gamma(n_2/2)} \int_0^\infty g^{(n_1+n_2-2)/2} \exp\left( -g\left( \frac{1}{2} + \frac{f n_1}{2 n_2} \right) \right) dg
\]
\[
= \frac{n_1 (f n_1)^{(n_1-2)/2}}{n_2^{n_1/2}\, 2^{(n_1+n_2)/2}\, \Gamma(n_1/2)\, \Gamma(n_2/2)} \left( \frac{2 n_2}{n_2 + f n_1} \right)^{(n_1+n_2)/2} \Gamma\left( \frac{n_1 + n_2}{2} \right)
\]
\[
= f^{(n_1-2)/2}\, n_1^{n_1/2}\, n_2^{n_2/2}\, \frac{\Gamma\left( (n_1+n_2)/2 \right)}{\Gamma(n_1/2)\, \Gamma(n_2/2)} \left( n_2 + f n_1 \right)^{-(n_1+n_2)/2}
= \frac{\Gamma\left( (n_1+n_2)/2 \right)}{\Gamma(n_1/2)\, \Gamma(n_2/2)} \cdot \frac{n_1^{n_1/2}\, n_2^{n_2/2}\, f^{n_1/2 - 1}}{(n_2 + f n_1)^{(n_1+n_2)/2}},
\]

which is the p.d.f. of an $F(n_1, n_2)$ distribution,

* using the transformation $x = \left( \frac{1}{2} + \frac{f n_1}{2 n_2} \right) g$, so that $g = \frac{2 n_2}{n_2 + f n_1}\, x$ and $dg = \frac{2 n_2}{n_2 + f n_1}\, dx$;
** using $\Gamma(\alpha) = \int_0^\infty x^{\alpha-1} \exp(-x)\, dx$, here with $\alpha = (n_1 + n_2)/2$.
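The derived density can be checked against a library implementation (a sketch, not in the original; the function name is illustrative, and log-gamma is used for numerical stability):

    import numpy as np
    from scipy.special import gammaln
    from scipy.stats import f

    def derived_pdf(x, n1, n2):
        # the density obtained in step v)
        logc = (gammaln((n1 + n2) / 2) - gammaln(n1 / 2) - gammaln(n2 / 2)
                + (n1 / 2) * np.log(n1) + (n2 / 2) * np.log(n2))
        return np.exp(logc + (n1 / 2 - 1) * np.log(x)
                      - ((n1 + n2) / 2) * np.log(n2 + n1 * x))

    x = np.linspace(0.1, 5, 50)
    print(np.max(np.abs(derived_pdf(x, 5, 10) - f.pdf(x, 5, 10))))   # ~0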

-End of Week 5 Tutorial Solutions-

© Katja Ignatieva, School of Risk and Actuarial Studies, ASB, UNSW
