
Chapter 16: Introduction to Bayesian Methods of Inference



16.1 Refer to Table 16.1.
a. beta(10, 30)
b. n = 25
c. beta(10, 30), n = 25
d. Yes
e. The posterior corresponding to the beta(1, 3) prior.

16.2 a.-d. Refer to Section 16.2

16.3 a.-e. Applet exercise, so answers vary.

16.4 a.-d. Applet exercise, so answers vary.

16.5 It should take more trials with a beta(10, 30) prior.

16.6 Here, $L(p \mid y) = p(y \mid p) = \binom{n}{y} p^{y}(1-p)^{n-y}$, where $y = 0, 1, \dots, n$ and $0 < p < 1$. So,
$$f(y, p) = \binom{n}{y} p^{y}(1-p)^{n-y} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1}(1-p)^{\beta-1}$$
so that
$$m(y) = \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{y+\alpha-1}(1-p)^{n-y+\beta-1}\,dp = \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(y+\alpha)\Gamma(n-y+\beta)}{\Gamma(n+\alpha+\beta)}.$$
The posterior density of $p$ is then
$$g^{*}(p \mid y) = \frac{\Gamma(n+\alpha+\beta)}{\Gamma(y+\alpha)\Gamma(n-y+\beta)}\, p^{y+\alpha-1}(1-p)^{n-y+\beta-1}, \quad 0 < p < 1.$$
This is the identical beta density as in Example 16.1 (recall that the sum of n i.i.d.
Bernoulli random variables is binomial with n trials and success probability p).
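
As a quick numerical sanity check, the conjugacy result can be verified on a grid in R: normalizing likelihood times prior should reproduce the beta(y + α, n − y + β) density. The prior parameters and data below (a0, b0, y, n) are arbitrary illustrative choices, not values from this exercise.

a0 <- 1; b0 <- 3                                  # prior parameters (illustrative)
y <- 4; n <- 25                                   # data (illustrative)
p <- seq(0.001, 0.999, by = 0.001)
unnorm <- dbinom(y, n, p) * dbeta(p, a0, b0)      # likelihood x prior
post <- unnorm / sum(unnorm * 0.001)              # normalize on the grid
max(abs(post - dbeta(p, y + a0, n - y + b0)))     # should be near zero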

16.7 a. The Bayes estimator is the mean of the posterior distribution. With a beta posterior having $\alpha^{*} = y + 1$ and $\beta^{*} = n - y + 3$ (from the beta(1, 3) prior), the posterior mean is
$$\hat{p}_B = \frac{Y + 1}{(Y + 1) + (n - Y + 3)} = \frac{Y + 1}{n + 4}.$$
b. $E(\hat{p}_B) = \dfrac{E(Y) + 1}{n + 4} = \dfrac{np + 1}{n + 4}$, $\qquad V(\hat{p}_B) = \dfrac{V(Y)}{(n+4)^2} = \dfrac{np(1-p)}{(n+4)^2}$.

16.8 a. From Ex. 16.6, the Bayes estimator for p is
$$\hat{p}_B = E(p \mid Y) = \frac{Y + 1}{n + 2}.$$
b. This is the uniform distribution in the interval (0, 1).

c. We know that $\hat{p} = Y/n$ is an unbiased estimator for p. However, for the Bayes estimator,
$$E(\hat{p}_B) = \frac{E(Y) + 1}{n + 2} = \frac{np + 1}{n + 2} \quad \text{and} \quad V(\hat{p}_B) = \frac{V(Y)}{(n+2)^2} = \frac{np(1-p)}{(n+2)^2}.$$
Thus,
$$\mathrm{MSE}(\hat{p}_B) = V(\hat{p}_B) + [B(\hat{p}_B)]^2 = \frac{np(1-p)}{(n+2)^2} + \left(\frac{np+1}{n+2} - p\right)^2 = \frac{np(1-p) + (1-2p)^2}{(n+2)^2}.$$

d. For the unbiased estimator $\hat{p}$, $\mathrm{MSE}(\hat{p}) = V(\hat{p}) = p(1-p)/n$. So, holding n fixed, we must determine the values of p such that
$$\frac{np(1-p) + (1-2p)^2}{(n+2)^2} < \frac{p(1-p)}{n}.$$
The range of values of p where this is satisfied is solved in Ex. 8.17(c).
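
The comparison in part (d) can also be examined numerically; the sketch below fixes n = 25 (an arbitrary choice) and reports the approximate range of p over which the Bayes estimator has the smaller MSE.

n <- 25                                    # arbitrary fixed sample size
p <- seq(0.01, 0.99, by = 0.01)
mse_bayes <- (n * p * (1 - p) + (1 - 2 * p)^2) / (n + 2)^2
mse_mle <- p * (1 - p) / n
range(p[mse_bayes < mse_mle])              # p for which the Bayes estimator wins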


16.9 a. Here, $L(y \mid p) = p(y \mid p) = p(1-p)^{y-1}$, where $y = 1, 2, \dots$ and $0 < p < 1$. So,
$$f(y, p) = p(1-p)^{y-1} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1}(1-p)^{\beta-1} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha}(1-p)^{y+\beta-2}$$
so that
$$m(y) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{\alpha}(1-p)^{y+\beta-2}\,dp = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(\alpha+1)\Gamma(y+\beta-1)}{\Gamma(y+\alpha+\beta)}.$$
The posterior density of $p$ is then
$$g^{*}(p \mid y) = \frac{\Gamma(y+\alpha+\beta)}{\Gamma(\alpha+1)\Gamma(y+\beta-1)}\, p^{\alpha}(1-p)^{y+\beta-2}, \quad 0 < p < 1.$$
This is a beta density with shape parameters $\alpha^{*} = \alpha + 1$ and $\beta^{*} = y + \beta - 1$.


b. The Bayes estimators are
$$(1) \quad \hat{p}_B = E(p \mid Y) = \frac{\alpha + 1}{\alpha + \beta + Y},$$
$$(2) \quad \widehat{[p(1-p)]}_B = E(p \mid Y) - E(p^2 \mid Y) = \frac{\alpha+1}{\alpha+\beta+Y} - \frac{(\alpha+1)(\alpha+2)}{(\alpha+\beta+Y)(\alpha+\beta+Y+1)} = \frac{(\alpha+1)(\beta+Y-1)}{(\alpha+\beta+Y)(\alpha+\beta+Y+1)},$$
where the second expectation was solved using the result from Ex. 4.200. (Alternately, the answer could be found by solving $E[p(1-p) \mid Y] = \int_0^1 p(1-p)\, g^{*}(p \mid Y)\,dp$.)
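
Both closed forms can be checked against numerical integration over the posterior; the values of α, β, and y below are arbitrary illustrative choices.

a0 <- 2; b0 <- 3; y <- 5                       # illustrative prior parameters and data
aStar <- a0 + 1; bStar <- y + b0 - 1           # posterior beta parameters
num <- integrate(function(p) p * (1 - p) * dbeta(p, aStar, bStar), 0, 1)$value
cf <- (a0 + 1) * (b0 + y - 1) /
  ((a0 + b0 + y) * (a0 + b0 + y + 1))          # closed form from part (b)
c(num, cf)                                     # the two values should agree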


16.10 a. The joint density of the random sample and θ is given by the product of the marginal densities multiplied by the gamma prior:
$$f(y_1, \dots, y_n, \theta) = \left[\prod_{i=1}^{n} \theta \exp(-\theta y_i)\right] \frac{\theta^{\alpha-1}\exp(-\theta/\beta)}{\Gamma(\alpha)\beta^{\alpha}} = \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}} \exp\!\left[-\theta\left(\frac{\beta\sum_{i=1}^{n} y_i + 1}{\beta}\right)\right].$$

b. $m(y_1, \dots, y_n) = \dfrac{1}{\Gamma(\alpha)\beta^{\alpha}} \displaystyle\int_0^{\infty} \theta^{n+\alpha-1} \exp\!\left[-\theta\left(\frac{\beta\sum_{i=1}^{n} y_i + 1}{\beta}\right)\right] d\theta$, but this integral resembles that of a gamma density with shape parameter $n + \alpha$ and scale parameter $\dfrac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}$. Thus, the solution is
$$m(y_1, \dots, y_n) = \frac{\Gamma(n+\alpha)}{\Gamma(\alpha)\beta^{\alpha}} \left(\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right)^{n+\alpha}.$$

c. The solution follows from parts (a) and (b) above.


d. Using the result in Ex. 4.111,
$$\hat{\mu}_B = E(\mu \mid Y_1, \dots, Y_n) = E(1/\theta \mid Y_1, \dots, Y_n) = \frac{1}{\beta^{*}(\alpha^{*} - 1)} = \frac{\beta\sum_{i=1}^{n} Y_i + 1}{\beta(n + \alpha - 1)} = \frac{\sum_{i=1}^{n} Y_i}{n + \alpha - 1} + \frac{1}{\beta(n + \alpha - 1)}.$$

e. The prior mean for $1/\theta$ is $E(1/\theta) = \dfrac{1}{\beta(\alpha - 1)}$ (again by Ex. 4.111). Thus, $\hat{\mu}_B$ can be written as
$$\hat{\mu}_B = \bar{Y}\left(\frac{n}{n + \alpha - 1}\right) + \frac{1}{\beta(\alpha - 1)}\left(\frac{\alpha - 1}{n + \alpha - 1}\right),$$
which is a weighted average of the MLE and the prior mean.


f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \mu = 1/\theta$. Therefore,
$$E(\hat{\mu}_B) = \frac{1}{\theta}\left(\frac{n}{n + \alpha - 1}\right) + \frac{1}{\beta(\alpha - 1)}\left(\frac{\alpha - 1}{n + \alpha - 1}\right).$$
Therefore, $\hat{\mu}_B$ is biased. However, it is asymptotically unbiased since $E(\hat{\mu}_B) \to 1/\theta$. Also,
$$V(\hat{\mu}_B) = V(\bar{Y})\left(\frac{n}{n + \alpha - 1}\right)^2 = \frac{1}{n\theta^{2}} \cdot \frac{n^{2}}{(n + \alpha - 1)^{2}} = \frac{n}{\theta^{2}(n + \alpha - 1)^{2}} \to 0.$$
So, $\hat{\mu}_B \xrightarrow{p} 1/\theta$ and thus it is consistent.
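
The consistency claim can be illustrated by simulation; θ and the prior parameters below are arbitrary choices, and the estimates should settle near 1/θ as n grows.

set.seed(1)
theta <- 2; a0 <- 3; b0 <- 0.5                      # arbitrary rate and prior parameters
for (n in c(10, 100, 10000)) {
  y <- rexp(n, rate = theta)
  muB <- (b0 * sum(y) + 1) / (b0 * (n + a0 - 1))    # Bayes estimate of 1/theta
  cat(n, muB, "\n")                                 # should approach 1/theta = 0.5
}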




16.11 a. The joint density of U and λ is
$$f(u, \lambda) = p(u \mid \lambda)g(\lambda) = \frac{(n\lambda)^{u}\exp(-n\lambda)}{u!} \cdot \frac{\lambda^{\alpha-1}\exp(-\lambda/\beta)}{\Gamma(\alpha)\beta^{\alpha}} = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{u+\alpha-1} \exp\!\left[-\lambda\left(\frac{n\beta + 1}{\beta}\right)\right].$$

b. $m(u) = \dfrac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}} \displaystyle\int_0^{\infty} \lambda^{u+\alpha-1} \exp\!\left[-\lambda\left(\frac{n\beta + 1}{\beta}\right)\right] d\lambda$, but this integral resembles that of a gamma density with shape parameter $u + \alpha$ and scale parameter $\dfrac{\beta}{n\beta + 1}$. Thus, the solution is
$$m(u) = \frac{n^{u}\,\Gamma(u+\alpha)}{u!\,\Gamma(\alpha)\beta^{\alpha}} \left(\frac{\beta}{n\beta + 1}\right)^{u+\alpha}.$$

c. The result follows from parts (a) and (b) above.

d. $\hat{\lambda}_B = E(\lambda \mid U) = \alpha^{*}\beta^{*} = (U + \alpha)\left(\dfrac{\beta}{n\beta + 1}\right)$.

e. The prior mean for λ is $E(\lambda) = \alpha\beta$. From the above,
$$\hat{\lambda}_B = \left(\sum_{i=1}^{n} Y_i + \alpha\right)\frac{\beta}{n\beta + 1} = \bar{Y}\left(\frac{n\beta}{n\beta + 1}\right) + \alpha\beta\left(\frac{1}{n\beta + 1}\right),$$
which is a weighted average of the MLE and the prior mean.
which is a weighted average of the MLE and the prior mean.


f. We know that Y is unbiased; thus E(Y ) = Therefore,

+
+

+
+

=
1
1
1 1
1
1
) ( )

(
n n
n
n n
n
Y E E
B
.
So,
B

is biased but it is asymptotically unbiased since


)

(
B
E 0.
Also,
330 Chapter 16: Introduction to Bayesian Methods of Inference
Instructors Solutions Manual

( )
0
1 1 1
) ( )

(
2
2 2

+

=

+

=

=
n
n
n
n
n n
n
Y V V
B
.
So,
p
B

and thus it is consistent.
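
The weighted-average identity in part (e) can be verified directly in R; the numbers below are borrowed from Ex. 16.19 purely for illustration.

a0 <- 2; b0 <- 3; n <- 25; u <- 174            # values as in Ex. 16.19
lambdaB <- (u + a0) * b0 / (n * b0 + 1)        # posterior mean from part (d)
w <- n * b0 / (n * b0 + 1)                     # weight on the MLE
c(lambdaB, w * (u / n) + (1 - w) * a0 * b0)    # identical by part (e)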





16.12 First, it is given that $W = vU = v\sum_{i=1}^{n}(Y_i - \mu_0)^2$ is chi-square with n degrees of freedom. Then, the density function for U (conditioned on v) is given by
$$f_{U}(u \mid v) = f_{W}(uv)\,v = \frac{1}{\Gamma(n/2)2^{n/2}}(uv)^{n/2-1}e^{-uv/2}\,v = \frac{v^{n/2}}{\Gamma(n/2)2^{n/2}}\,u^{n/2-1}e^{-uv/2}.$$

a. The joint density of U and v is then
$$f(u, v) = f_{U}(u \mid v)g(v) = \frac{v^{n/2}u^{n/2-1}e^{-uv/2}}{\Gamma(n/2)2^{n/2}} \cdot \frac{v^{\alpha-1}e^{-v/\beta}}{\Gamma(\alpha)\beta^{\alpha}} = \frac{u^{n/2-1}}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}}\, v^{n/2+\alpha-1}\exp\!\left[-v\left(\frac{u\beta + 2}{2\beta}\right)\right].$$

b. $m(u) = \dfrac{u^{n/2-1}}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}} \displaystyle\int_0^{\infty} v^{n/2+\alpha-1}\exp\!\left[-v\left(\frac{u\beta + 2}{2\beta}\right)\right] dv$, but this integral resembles that of a gamma density with shape parameter $n/2 + \alpha$ and scale parameter $\dfrac{2\beta}{u\beta + 2}$. Thus, the solution is
$$m(u) = \frac{u^{n/2-1}\,\Gamma(n/2+\alpha)}{\Gamma(n/2)\Gamma(\alpha)2^{n/2}\beta^{\alpha}} \left(\frac{2\beta}{u\beta + 2}\right)^{n/2+\alpha}.$$

c. The result follows from parts (a) and (b) above.


d. Using the result in Ex. 4.111(e),
$$\hat{\sigma}^{2}_B = E(\sigma^{2} \mid U) = E(1/v \mid U) = \frac{1}{\beta^{*}(\alpha^{*} - 1)} = \frac{U\beta + 2}{\beta(n + 2\alpha - 2)}.$$

e. The prior mean for $\sigma^{2} = 1/v$ is $E(1/v) = \dfrac{1}{\beta(\alpha - 1)}$. From the above,
$$\hat{\sigma}^{2}_B = \frac{U\beta + 2}{\beta(n + 2\alpha - 2)} = \frac{U}{n}\left(\frac{n}{n + 2\alpha - 2}\right) + \frac{1}{\beta(\alpha - 1)}\left(\frac{2(\alpha - 1)}{n + 2\alpha - 2}\right),$$
which is a weighted average of the MLE ($U/n$) and the prior mean.
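
As a numerical check of part (d), E(1/v | U) can be integrated against the posterior density; the values below are borrowed from Ex. 16.20 for illustration.

n <- 8; a0 <- 5; b0 <- 2; u <- 0.8579             # values as in Ex. 16.20
shp <- n / 2 + a0                                 # posterior shape
scl <- 2 * b0 / (u * b0 + 2)                      # posterior scale
num <- integrate(function(v) (1 / v) * dgamma(v, shp, scale = scl), 0, Inf)$value
cf <- (u * b0 + 2) / (b0 * (n + 2 * a0 - 2))      # closed form from part (d)
c(num, cf)                                        # the two values should agree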


16.13 a. (.099, .710)
b. Both probabilities are .025.

c. P(.099 < p < .710) = .95.
d.-g. Answers vary.
h. The credible intervals should decrease in width with larger sample sizes.


16.14 a.-b. Answers vary.

16.15 With y = 4, n = 25, and a beta(1, 3) prior, the posterior distribution for p is beta(5, 24).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025, 5, 24)
[1] 0.06064291
> qbeta(.975, 5, 24)
[1] 0.3266527

16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025, 5, 22)
[1] 0.06554811
> qbeta(.975, 5, 22)
[1] 0.3486788

This is a wider interval than what was obtained in Ex. 16.15.


16.17 With y = 6 and a beta(10, 5) prior, the posterior distribution for p is beta(11, 10). Using
R, the lower and upper endpoints of the 80% credible interval for p are given by:
> qbeta(.10, 11, 10)
[1] 0.3847514
> qbeta(.90, 11, 10)
[1] 0.6618291

16.18 With n = 15, $\sum_{i=1}^{n} y_i = 30.27$, and a gamma(2.3, 0.4) prior, the posterior distribution for θ is gamma(17.3, .0305167). Using R, the lower and upper endpoints of the 80% credible interval for θ are given by
> qgamma(.10, shape=17.3, scale=.0305167)
[1] 0.3731982
> qgamma(.90, shape=17.3, scale=.0305167)
[1] 0.6957321

The 80% credible interval for θ is (.3732, .6957). To create an 80% credible interval for 1/θ, the endpoints of the previous interval can be inverted:

.3732 < θ < .6957
1/(.3732) > 1/θ > 1/(.6957)

Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/θ is (1.4374, 2.6795).
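
Equivalently, both inverted endpoints can be obtained in a single R call:

1/qgamma(c(.90, .10), shape=17.3, scale=.0305167)   # endpoints of the interval for 1/theta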



16.19 With n = 25, $\sum_{i=1}^{n} y_i = 174$, and a gamma(2, 3) prior, the posterior distribution for λ is gamma(176, .0394739). Using R, the lower and upper endpoints of the 95% credible interval for λ are given by
> qgamma(.025, shape=176, scale=.0394739)
[1] 5.958895
> qgamma(.975, shape=176, scale=.0394739)
[1] 8.010663

16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible interval for v are given by
> qgamma(.05, shape=9, scale=1.0764842)
[1] 5.054338
> qgamma(.95, shape=9, scale=1.0764842)
[1] 15.53867

The 90% credible interval for v is (5.054, 15.539). Similar to Ex. 16.18, the 90% credible interval for $\sigma^{2} = 1/v$ is found by inverting the endpoints of the credible interval for v, given by (.0644, .1979).
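
As in Ex. 16.18, the inversion can be done in one R call:

1/qgamma(c(.95, .05), shape=9, scale=1.0764842)     # endpoints of the interval for sigma^2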

16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find $P^{*}(H_0) = P^{*}(p < .3)$ by (in R):
> pbeta(.3, 5, 24)
[1] 0.9525731

Therefore, $P^{*}(H_a) = P^{*}(p \geq .3) = 1 - .9525731 = .0474269$. Since the probability associated with $H_0$ is much larger, our decision is to not reject $H_0$.

16.22 From Ex. 16.16, the posterior distribution of p is beta(5, 22). We can find $P^{*}(H_0) = P^{*}(p < .3)$ by (in R):
> pbeta(.3, 5, 22)
[1] 0.9266975

Therefore, $P^{*}(H_a) = P^{*}(p \geq .3) = 1 - .9266975 = .0733025$. Since the probability associated with $H_0$ is much larger, our decision is to not reject $H_0$.

16.23 From Ex. 16.17, the posterior distribution of p is beta(11, 10). Thus, $P^{*}(H_0) = P^{*}(p < .4)$ is given by (in R):
> pbeta(.4, 11, 10)
[1] 0.1275212

Therefore, $P^{*}(H_a) = P^{*}(p \geq .4) = 1 - .1275212 = .8724788$. Since the probability associated with $H_a$ is much larger, our decision is to reject $H_0$.

16.24 From Ex. 16.18, the posterior distribution for θ is gamma(17.3, .0305). To test
$H_0$: θ > .5 vs. $H_a$: θ ≤ .5,
we calculate $P^{*}(H_0) = P^{*}(\theta > .5)$ as:
> 1 - pgamma(.5, shape=17.3, scale=.0305)
[1] 0.5561767

Therefore, $P^{*}(H_a) = P^{*}(\theta \leq .5) = 1 - .5561767 = .4438233$. The probability associated with $H_0$ is larger (but only marginally so), so our decision is to not reject $H_0$.


16.25 From Ex. 16.19, the posterior distribution for λ is gamma(176, .0395). Thus, $P^{*}(H_0) = P^{*}(\lambda > 6)$ is found by
> 1 - pgamma(6, shape=176, scale=.0395)
[1] 0.9700498

Therefore, $P^{*}(H_a) = P^{*}(\lambda \leq 6) = 1 - .9700498 = .0299502$. Since the probability associated with $H_0$ is much larger, our decision is to not reject $H_0$.

16.26 From Ex. 16.20, the posterior distribution for v is gamma(9, 1.0765). To test
$H_0$: v < 10 vs. $H_a$: v ≥ 10,
we calculate $P^{*}(H_0) = P^{*}(v < 10)$ as
> pgamma(10, shape=9, scale=1.0765)
[1] 0.5818

Therefore, $P^{*}(H_a) = P^{*}(v \geq 10) = 1 - .5818 = .4182$. Since the probability associated with $H_0$ is larger, our decision is to not reject $H_0$.
