Final Exam Practice Problems
Question #1: If $X_1$ and $X_2$ denote a random sample of size $n=2$ from the population with density $f_X(x) = \frac{1}{x^2}\,\mathbf{1}\{x > 1\}$, then a) find the joint density of $U = X_1 X_2$ and $V = X_2$, and b) find the marginal probability density function of $U$.
a) By the independence of the two random variables $X_1$ and $X_2$, their joint density is given by $f_{X_1 X_2}(x_1, x_2) = f_X(x_1) f_X(x_2) = x_1^{-2} x_2^{-2} = (x_1 x_2)^{-2}$ whenever $x_1 > 1$ and $x_2 > 1$. Then $u = x_1 x_2$ and $v = x_2$ imply that $x_2 = v$ and $x_1 = \frac{u}{x_2} = \frac{u}{v}$, so we can compute the Jacobian as
$$J = \det \begin{bmatrix} \dfrac{\partial x_1}{\partial u} & \dfrac{\partial x_1}{\partial v} \\[4pt] \dfrac{\partial x_2}{\partial u} & \dfrac{\partial x_2}{\partial v} \end{bmatrix} = \det \begin{bmatrix} \dfrac{1}{v} & \dfrac{-u}{v^2} \\[4pt] 0 & 1 \end{bmatrix} = \frac{1}{v}.$$
This implies that the joint density of $U$ and $V$ is
$$f_{UV}(u, v) = f_{X_1 X_2}\!\left(\frac{u}{v}, v\right) |J| = \left(\frac{u}{v} \cdot v\right)^{-2} \frac{1}{v} = \frac{1}{u^2 v}$$
whenever $v > 1$ and $\frac{u}{v} > 1 \rightarrow u > v$. Note that although this formula factors as $\frac{1}{u^2} \cdot \frac{1}{v}$, the support $\{(u,v) : u > v > 1\}$ is not a product set, so $U$ and $V$ are not independent.
b) We have $f_U(u) = \int_1^u f_{UV}(u, v)\, dv = \int_1^u \frac{1}{u^2 v}\, dv = \frac{1}{u^2}\left[\ln(v)\right]_1^u = \frac{\ln(u)}{u^2}$ whenever $u > 1$.
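As a quick sanity check, the Python sketch below simulates $U = X_1 X_2$ and compares its empirical CDF with $F_U(u) = 1 - (1 + \ln u)/u$, which follows from integrating the marginal density just derived; the sample size and seed are illustrative choices.

```python
import numpy as np

# Simulate X ~ f(x) = x^(-2) on (1, inf) by inverse-CDF sampling:
# F(x) = 1 - 1/x, so X = 1/(1 - U) for U ~ Uniform(0, 1).
rng = np.random.default_rng(0)
n = 200_000
x1 = 1.0 / (1.0 - rng.random(n))
x2 = 1.0 / (1.0 - rng.random(n))
u = x1 * x2

# Compare the empirical CDF of U = X1 * X2 with the CDF implied by
# f_U(u) = ln(u)/u^2, namely F_U(u) = 1 - (1 + ln(u))/u for u > 1.
for point in (2.0, 5.0, 20.0):
    empirical = np.mean(u <= point)
    theoretical = 1.0 - (1.0 + np.log(point)) / point
    print(f"u={point:5.1f}  empirical={empirical:.4f}  theoretical={theoretical:.4f}")
```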
Question #2: Find the Maximum Likelihood Estimators (MLE) of $\theta$ and $\alpha$ based on a random sample $X_1, \ldots, X_n$ from $f_X(x; \theta, \alpha) = \frac{\alpha}{\theta^\alpha} x^{\alpha-1}\,\mathbf{1}\{0 \le x \le \theta\}$, where $\theta > 0$ and $\alpha > 0$.
The likelihood function is
$$L(\theta, \alpha) = \prod_{i=1}^n f_X(x_i; \theta, \alpha) = \prod_{i=1}^n \frac{\alpha}{\theta^\alpha} x_i^{\alpha-1}\,\mathbf{1}\{0 \le x_i \le \theta\} = \alpha^n \theta^{-n\alpha} \left(\prod_{i=1}^n x_i\right)^{\alpha-1} \mathbf{1}\{0 \le x_{1:n}\}\,\mathbf{1}\{x_{n:n} \le \theta\}.$$
Since $\theta^{-n\alpha}$ is decreasing in $\theta$, the likelihood is maximized by taking $\theta$ as small as the indicator allows, so $\hat{\theta}_{MLE} = X_{n:n}$. For $\alpha$, the log-likelihood is $\ln[L(\theta, \alpha)] = n\ln(\alpha) - n\alpha \ln(\theta) + (\alpha - 1)\sum_{i=1}^n \ln(x_i)$, so setting
$$\frac{\partial}{\partial \alpha} \ln[L(\theta, \alpha)] = \frac{n}{\alpha} - n\ln(\theta) + \sum_{i=1}^n \ln(x_i) = 0 \rightarrow \alpha = \frac{n}{n\ln(\theta) - \sum_{i=1}^n \ln(x_i)}.$$
Since the second derivative $-n/\alpha^2$ is negative, this is a maximum, so substituting $\hat{\theta}_{MLE} = X_{n:n}$ gives
$$\hat{\alpha}_{MLE} = \frac{n}{n\ln(X_{n:n}) - \sum_{i=1}^n \ln(X_i)}.$$
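A minimal numerical sketch of these estimators, with the parameter values $\theta = 4$, $\alpha = 2.5$ and the seed as illustrative assumptions: since $F_X(x) = (x/\theta)^\alpha$, a draw is $X = \theta U^{1/\alpha}$ with $U \sim \mathrm{UNIF}(0,1)$.

```python
import numpy as np

# Draw from f(x; theta, alpha) via the inverse CDF x = theta * u^(1/alpha).
rng = np.random.default_rng(1)
theta, alpha, n = 4.0, 2.5, 5_000
x = theta * rng.random(n) ** (1.0 / alpha)

# Closed-form MLEs derived above.
theta_mle = x.max()                                         # X_{n:n}
alpha_mle = n / (n * np.log(theta_mle) - np.log(x).sum())
print(theta_mle, alpha_mle)   # should land near (4.0, 2.5)
```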
Question #3: Let $X_1, \ldots, X_n$ be a random sample from $N(\mu, \sigma^2)$. a) Find a UMVUE of $\mu$. b) Find a UMVUE of $\sigma^2$ when $\mu$ is known.

a) We have previously shown that the Normal distribution is a member of the Regular Exponential Class (REC), where $S_1 = \sum_{i=1}^n X_i$ and $S_2 = \sum_{i=1}^n X_i^2$ are jointly complete sufficient statistics for the unknown parameters $\mu$ and $\sigma^2$. Since $E(S_1) = E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n E(X_i) = \sum_{i=1}^n \mu = n\mu$, the statistic $T = \frac{1}{n} S_1 = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X}$ will be unbiased for $\mu$, so the Lehmann-Scheffe Theorem guarantees that $\bar{X}$ is a UMVUE for $\mu$.

b) Since $E(S_2) = E\left(\sum_{i=1}^n X_i^2\right) = \sum_{i=1}^n E(X_i^2) = \sum_{i=1}^n \left[\mathrm{Var}(X_i) + E(X_i)^2\right] = \sum_{i=1}^n (\sigma^2 + \mu^2) = n(\sigma^2 + \mu^2)$, the statistic $T = \frac{1}{n} S_2 - \mu^2$ will be unbiased for the unknown parameter $\sigma^2$ when $\mu$ is known. The Lehmann-Scheffe Theorem states that $T$ is a UMVUE for $\sigma^2$.
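A short simulation check of the two unbiasedness claims; the values of $\mu$, $\sigma^2$, $n$, and the seed below are illustrative.

```python
import numpy as np

# Check E(T1) = mu and E(T2) = sigma^2 across many replications.
rng = np.random.default_rng(2)
mu, sigma2, n, reps = 1.5, 4.0, 20, 100_000
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

t1 = x.mean(axis=1)                # T1 = (1/n) * S1 = X-bar
t2 = (x**2).mean(axis=1) - mu**2   # T2 = (1/n) * S2 - mu^2 (mu known)
print(t1.mean(), t2.mean())        # approximately 1.5 and 4.0
```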
Question #4: Consider a random sample $X_1, \ldots, X_n$ from $f_X(x; \theta, \alpha) = \frac{\alpha}{\theta^\alpha} x^{\alpha-1}\,\mathbf{1}\{0 \le x \le \theta\}$, where $\theta > 0$ is unknown but $\alpha > 0$ is known. Find constants $1 < a < b$, depending on the values of $n$ and $\alpha$, such that $(a X_{n:n}, b X_{n:n})$ is a 90% confidence interval for $\theta$.
We first compute the CDF of the population:
$$F_X(x; \theta, \alpha) = \int_0^x f_X(t; \theta, \alpha)\, dt = \frac{\alpha}{\theta^\alpha} \int_0^x t^{\alpha-1}\, dt = \frac{\alpha}{\theta^\alpha} \left[\frac{t^\alpha}{\alpha}\right]_0^x = \left(\frac{x}{\theta}\right)^\alpha \quad \text{whenever } 0 \le x \le \theta.$$
Then we use this to compute the CDF of the largest order statistic, $F_{X_{n:n}}(x) = P(X_{n:n} \le x) = P(X_i \le x)^n = \left(\frac{x}{\theta}\right)^{n\alpha}$. In order for $(a X_{n:n}, b X_{n:n})$ to be a 90% confidence interval for $\theta$, it must be the case that $P(a X_{n:n} \ge \theta) = 0.05$ and $P(b X_{n:n} \le \theta) = 0.05$. We solve each of these two equations for the unknown constants:
$$P(a X_{n:n} \ge \theta) = 0.05 \rightarrow P\left(X_{n:n} \le \frac{\theta}{a}\right) = 0.95 \rightarrow F_{X_{n:n}}\!\left(\frac{\theta}{a}\right) = 0.95 \rightarrow \left(\frac{1}{a}\right)^{n\alpha} = 0.95 \rightarrow a = \left(\frac{20}{19}\right)^{1/n\alpha}.$$
Similarly, $P(b X_{n:n} \le \theta) = 0.05 \rightarrow \left(\frac{1}{b}\right)^{n\alpha} = 0.05$, so we find that $b = (20)^{1/n\alpha}$.
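A coverage sketch for this interval, with $\theta = 3$, $\alpha = 2$, $n = 10$ and the seed chosen purely for illustration:

```python
import numpy as np

# Simulate X_{n:n} and check that (a * X_max, b * X_max) covers theta ~90%.
rng = np.random.default_rng(3)
theta, alpha, n, reps = 3.0, 2.0, 10, 100_000
x = theta * rng.random((reps, n)) ** (1.0 / alpha)   # inverse-CDF draws
x_max = x.max(axis=1)

a = (20.0 / 19.0) ** (1.0 / (n * alpha))
b = 20.0 ** (1.0 / (n * alpha))
covered = (a * x_max < theta) & (theta < b * x_max)
print(covered.mean())   # should be close to 0.90
```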
Question #5: If $X_1$ and $X_2$ are independent and identically distributed from $\exp(\theta)$ such that their density is $f_X(x) = \frac{1}{\theta} e^{-x/\theta}\,\mathbf{1}\{x > 0\}$, then find the joint density of $U = X_1$ and $V = \frac{X_2}{X_1}$.
Since $u = x_1$ and $v = x_2/x_1$ give $x_1 = u$ and $x_2 = uv$, we have
$$f_{UV}(u, v) = f_{X_1 X_2}(u, uv) \left|\det \begin{bmatrix} 1 & 0 \\ v & u \end{bmatrix}\right| = \frac{1}{\theta} e^{-u/\theta} \cdot \frac{1}{\theta} e^{-uv/\theta} \cdot |u| = \frac{u}{\theta^2} e^{-u(1+v)/\theta}$$
whenever $u > 0$ and $uv > 0 \rightarrow v > 0$.
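As a plausibility check (with scipy assumed available and $\theta = 2$ an illustrative value), the sketch below compares the simulated probability $P(V \le 1) = P(X_2 \le X_1)$, which is $1/2$ by symmetry, with the same probability computed by numerically integrating the derived joint density:

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(4)
theta, n = 2.0, 500_000
x1 = rng.exponential(theta, n)
x2 = rng.exponential(theta, n)
print(np.mean(x2 / x1 <= 1.0))        # empirical P(V <= 1), about 0.5

# Integrate f_UV(u, v) over u in (0, inf) and v in (0, 1); dblquad's
# callable takes the inner variable (u) first.
f_uv = lambda u, v: (u / theta**2) * np.exp(-u * (1.0 + v) / theta)
prob, _ = integrate.dblquad(f_uv, 0.0, 1.0, 0.0, np.inf)
print(prob)                           # also about 0.5
```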
Question #6: Let $X_1, \ldots, X_n$ be a random sample from $\mathrm{WEI}(\theta, \beta)$ with known $\beta$ such that their common density is $f_X(x; \theta, \beta) = \frac{\beta}{\theta^\beta} x^{\beta-1} e^{-(x/\theta)^\beta}\,\mathbf{1}\{x > 0\}$. Find the MLE of $\theta$.
We have
$$L(\theta, \beta) = \prod_{i=1}^n \frac{\beta}{\theta^\beta} x_i^{\beta-1} e^{-(x_i/\theta)^\beta} = \left(\beta \theta^{-\beta}\right)^n \left(\prod_{i=1}^n x_i\right)^{\beta-1} e^{-\frac{1}{\theta^\beta}\sum_{i=1}^n x_i^\beta},$$
so that $\ln[L(\theta, \beta)] = n\ln(\beta) - n\beta\ln(\theta) + (\beta - 1)\sum_{i=1}^n \ln(x_i) - \frac{1}{\theta^\beta}\sum_{i=1}^n x_i^\beta$. Then we can calculate that
$$\frac{\partial}{\partial \theta} \ln[L(\theta, \beta)] = \frac{-n\beta}{\theta} + \frac{\beta}{\theta^{\beta+1}} \sum_{i=1}^n x_i^\beta = 0 \rightarrow \frac{n\beta}{\theta} = \frac{\beta}{\theta^{\beta+1}} \sum_{i=1}^n x_i^\beta \rightarrow n = \frac{1}{\theta^\beta} \sum_{i=1}^n x_i^\beta \rightarrow \theta^\beta = \frac{1}{n}\sum_{i=1}^n x_i^\beta.$$
Therefore, we have found $\hat{\theta}_{MLE} = \left(\frac{1}{n}\sum_{i=1}^n X_i^\beta\right)^{1/\beta}$.
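A numerical sketch of this estimator with illustrative values $\theta = 2$, $\beta = 1.5$: NumPy's `weibull` draws from the standard Weibull with scale 1, so multiplying by $\theta$ matches the $\mathrm{WEI}(\theta, \beta)$ parameterization above.

```python
import numpy as np

rng = np.random.default_rng(5)
theta, beta, n = 2.0, 1.5, 100_000
x = theta * rng.weibull(beta, n)   # WEI(theta, beta) draws

theta_mle = np.mean(x ** beta) ** (1.0 / beta)
print(theta_mle)   # should be close to theta = 2.0
```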
Question #7: Use the Cramer-Rao Lower Bound to show that $\bar{X}$ is a UMVUE for $\theta$ based on a random sample of size $n$ from an $\exp(\theta)$ distribution where $f_X(x; \theta) = \frac{1}{\theta} e^{-x/\theta}\,\mathbf{1}\{x > 0\}$.
We first find the Cramer-Rao Lower Bound; since $\tau(\theta) = \theta$, we have $[\tau'(\theta)]^2 = 1$. Then $\ln f(X; \theta) = -\ln(\theta) - \frac{X}{\theta}$, so that $\frac{\partial}{\partial \theta} \ln f(X; \theta) = \frac{-1}{\theta} + \frac{X}{\theta^2}$ and $\frac{\partial^2}{\partial \theta^2} \ln f(X; \theta) = \frac{1}{\theta^2} - \frac{2X}{\theta^3}$. Finally,
$$-E\left[\frac{\partial^2}{\partial \theta^2} \ln f(X; \theta)\right] = \frac{-1}{\theta^2} + \frac{2}{\theta^3} E(X) = \frac{2\theta}{\theta^3} - \frac{1}{\theta^2} = \frac{1}{\theta^2}$$
implies that $\mathrm{CRLB} = \dfrac{[\tau'(\theta)]^2}{n \cdot \frac{1}{\theta^2}} = \dfrac{\theta^2}{n}$. To verify that $\bar{X}$ is a UMVUE for $\theta$, we first show that $E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n} n\theta = \theta$. Then we check that the variance achieves the Cramer-Rao Lower Bound from above:
$$\mathrm{Var}(\bar{X}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{n\theta^2}{n^2} = \frac{\theta^2}{n} = \mathrm{CRLB}.$$
Thus, $\bar{X}$ is a UMVUE for $\theta$.
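A quick simulation with illustrative values $\theta = 3$, $n = 25$, confirming that the variance of $\bar{X}$ sits at the bound $\theta^2/n$:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 3.0, 25, 200_000
xbar = rng.exponential(theta, size=(reps, n)).mean(axis=1)

# Sample variance of X-bar across replications versus the CRLB.
print(xbar.var(), theta**2 / n)   # both approximately 0.36
```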
Question #8: Use the Lehmann-Scheffe Theorem to show that $\bar{X}$ is a UMVUE for $\theta$ based on a random sample of size $n$ from an $\exp(\theta)$ distribution where $f_X(x; \theta) = \frac{1}{\theta} e^{-x/\theta}\,\mathbf{1}\{x > 0\}$.
We know that $\exp(\theta)$ is a member of the Regular Exponential Class (REC) with $t_1(x) = x$, so the statistic $S = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n X_i$ is complete sufficient for the parameter $\theta$. Then $E(S) = E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n E(X_i) = n\theta$ implies that the estimator $T = \frac{1}{n} S = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X}$ is unbiased for $\theta$, so the Lehmann-Scheffe Theorem guarantees that it will be the Uniform Minimum Variance Unbiased Estimator of $\theta$.
Question #9: Find a $100(1-\alpha)\%$ confidence interval for $\theta$ based on a random sample of size $n$ from an $\exp(\theta)$ distribution where $f_X(x) = \frac{1}{\theta} e^{-x/\theta}\,\mathbf{1}\{x > 0\}$. Use the facts that $\exp(\theta) \equiv \mathrm{GAMMA}(\theta, 1)$, that $\theta$ is a scale parameter, and that $\mathrm{GAMMA}(2, n) \equiv \chi^2(2n)$.
Since $\theta$ is a scale parameter, $Q = \frac{\bar{X}}{\theta}$ is a pivotal quantity; we use the modified pivot $R = 2nQ = \frac{2}{\theta}\sum_{i=1}^n X_i$. To verify its distribution, we use the CDF technique, so with $A = \sum_{i=1}^n X_i \sim \mathrm{GAMMA}(\theta, n)$,
$$F_R(r) = P(R \le r) = P\left(\frac{2}{\theta}\sum_{i=1}^n X_i \le r\right) = P\left(\sum_{i=1}^n X_i \le \frac{\theta r}{2}\right) = F_A\left(\frac{\theta r}{2}\right)$$
implies that
$$f_R(r) = \frac{d}{dr} F_A\left(\frac{\theta r}{2}\right) = f_A\left(\frac{\theta r}{2}\right)\frac{\theta}{2} = \frac{1}{\theta^n \Gamma(n)}\left(\frac{\theta r}{2}\right)^{n-1} e^{-r/2}\,\frac{\theta}{2} = \frac{1}{2^n \Gamma(n)} r^{n-1} e^{-r/2}.$$
This proves that $R = \frac{2}{\theta}\sum_{i=1}^n X_i = \frac{2n\bar{X}}{\theta} \sim \chi^2(2n)$, so
$$P\left[\chi^2_{\alpha/2}(2n) < \frac{2n\bar{X}}{\theta} < \chi^2_{1-\alpha/2}(2n)\right] = 1 - \alpha \rightarrow P\left[\frac{2n\bar{X}}{\chi^2_{1-\alpha/2}(2n)} < \theta < \frac{2n\bar{X}}{\chi^2_{\alpha/2}(2n)}\right] = 1 - \alpha,$$
so $\left(\frac{2n\bar{X}}{\chi^2_{1-\alpha/2}(2n)}, \frac{2n\bar{X}}{\chi^2_{\alpha/2}(2n)}\right)$ is the desired $100(1-\alpha)\%$ confidence interval for $\theta$.
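A coverage sketch for this interval (scipy assumed available; the values of $\theta$, $n$, $\alpha$, and the seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
theta, n, alpha, reps = 2.0, 15, 0.10, 100_000
xbar = rng.exponential(theta, size=(reps, n)).mean(axis=1)

# Interval endpoints from the chi-square quantiles of 2n*X-bar/theta.
lo = 2 * n * xbar / stats.chi2.ppf(1 - alpha / 2, df=2 * n)
hi = 2 * n * xbar / stats.chi2.ppf(alpha / 2, df=2 * n)
print(np.mean((lo < theta) & (theta < hi)))   # should be close to 0.90
```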
Question #10: Let $X_1, \ldots, X_n$ be a random sample from the distribution with CDF $F_X(x) = e^x$ for $x \le 0$. Find the limiting distribution of $Y_n = X_{1:n} + \ln(n)$.

We have that
$$F_{Y_n}(y) = P(Y_n \le y) = P(X_{1:n} + \ln(n) \le y) = P(X_{1:n} \le y - \ln(n)) = 1 - P(X_{1:n} > y - \ln(n)) = 1 - P(X_i > y - \ln(n))^n$$
$$= 1 - \left(1 - e^{y - \ln(n)}\right)^n = 1 - \left(1 - \frac{e^y}{n}\right)^n \to 1 - e^{-e^y} \quad \text{as } n \to \infty,$$
using the fact that $\lim_{n \to \infty}\left(1 + \frac{c}{n}\right)^{nb} = e^{cb}$ for all real numbers $c$ and $b$. Thus, the limiting distribution of $Y_n$ has CDF $F_Y(y) = 1 - e^{-e^y}$.
Question #11: Let $X_1, \ldots, X_{100}$ be a random sample from the density $f_X(x) = \frac{1}{2}\,\mathbf{1}\{-1 < x < 1\}$. Use the Central Limit Theorem to approximate $P\left(\sum_{i=1}^{100} X_i \le 0\right)$.

From the given density, we know that $X \sim \mathrm{UNIF}(-1, 1)$, so $E(X) = 0$ and $\mathrm{Var}(X) = \frac{1}{3}$. Then if we define $Y = \sum_{i=1}^{100} X_i$, we have that $E(Y) = E\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} E(X_i) = 0$ and $\mathrm{Var}(Y) = \mathrm{Var}\left(\sum_{i=1}^{100} X_i\right) = \sum_{i=1}^{100} \mathrm{Var}(X_i) = 100/3$. These facts allow us to compute
$$P\left(\sum_{i=1}^{100} X_i \le 0\right) = P(Y \le 0) = P\left(\frac{Y - 0}{\sqrt{100/3}} \le \frac{0 - 0}{\sqrt{100/3}}\right) \approx P(Z \le 0) = \Phi(0) = 1/2.$$
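A one-line simulation check of the approximation (seed arbitrary):

```python
import numpy as np

# Direct Monte Carlo estimate of P(sum of 100 UNIF(-1,1) draws <= 0).
rng = np.random.default_rng(8)
sums = rng.uniform(-1.0, 1.0, size=(100_000, 100)).sum(axis=1)
print(np.mean(sums <= 0))   # approximately 0.5
```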
Question #12: Suppose that $X$ is a random variable with density $f_X(x) = \frac{1}{2} e^{-|x|}$ for $x \in \mathbb{R}$. Compute the Moment Generating Function (MGF) of $X$ and use it to find $E(X^2)$.
By definition, we have
$$M_X(t) = E(e^{tX}) = \int_{-\infty}^\infty e^{tx} f_X(x)\, dx = \int_{-\infty}^\infty \frac{1}{2} e^{tx} e^{-|x|}\, dx = \frac{1}{2}\int_0^\infty e^{x(t-1)}\, dx + \frac{1}{2}\int_{-\infty}^0 e^{x(t+1)}\, dx,$$
where both integrals converge whenever $|t| < 1$. After integrating and collecting like terms, we obtain $M_X(t) = \frac{1}{2}\left[(t+1)^{-1} - (t-1)^{-1}\right]$. We then compute
$$\frac{d}{dt} M_X(t) = \frac{1}{2}\left[-(t+1)^{-2} + (t-1)^{-2}\right] \quad \text{and} \quad \frac{d^2}{dt^2} M_X(t) = \frac{1}{2}\left[\frac{2}{(t+1)^3} - \frac{2}{(t-1)^3}\right],$$
so that $E(X^2) = \frac{d^2}{dt^2} M_X(0) = \frac{1}{2}\left[\frac{2}{(0+1)^3} - \frac{2}{(0-1)^3}\right] = \frac{1}{2}(2 + 2) = 2$.
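A numerical check of both results (scipy assumed available; the evaluation point $t = 0.3$ is arbitrary within $|t| < 1$):

```python
import numpy as np
from scipy import integrate

density = lambda x: 0.5 * np.exp(-abs(x))

# E(X^2) directly from the density.
ex2, _ = integrate.quad(lambda x: x**2 * density(x), -np.inf, np.inf)
print(ex2)   # 2.0

# Defining integral of the MGF versus the closed form at t = 0.3.
t = 0.3
mgf_integral, _ = integrate.quad(lambda x: np.exp(t * x) * density(x),
                                 -np.inf, np.inf)
mgf_closed = 0.5 * (1.0 / (t + 1.0) - 1.0 / (t - 1.0))
print(mgf_integral, mgf_closed)   # both equal 1/(1 - t^2), about 1.0989
```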
Question #13: Let $X$ be a random variable with density $f_X(x) = \frac{1}{4}\,\mathbf{1}\{-2 < x < 2\}$. Compute the probability density function of the transformed random variable $Y = X^3$.

Since $y = x^3$ is strictly increasing with inverse $x = y^{1/3}$ and $\frac{dx}{dy} = \frac{1}{3} y^{-2/3}$, the transformation formula gives $f_Y(y) = f_X(y^{1/3})\left|\frac{dx}{dy}\right| = \frac{1}{4} \cdot \frac{1}{3|y|^{2/3}} = \frac{1}{12|y|^{2/3}}$ whenever $-8 < y < 8$.
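A Monte Carlo check of this density (seed arbitrary), comparing the empirical CDF of $Y = X^3$ with $F_Y(y) = (y^{1/3} + 2)/4$, the antiderivative of $f_Y$ on $(-8, 8)$:

```python
import numpy as np

rng = np.random.default_rng(9)
y = rng.uniform(-2.0, 2.0, 500_000) ** 3   # Y = X^3 with X ~ UNIF(-2, 2)

for point in (-1.0, 0.5, 4.0):
    empirical = np.mean(y <= point)
    theoretical = (np.cbrt(point) + 2.0) / 4.0   # F_Y(y) = (y^(1/3) + 2)/4
    print(f"y={point:5.1f}  empirical={empirical:.4f}  theoretical={theoretical:.4f}")
```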
Question #14: Let $X_1, \ldots, X_n$ be a random sample from the density $f_X(x) = \theta x^{\theta-1}$ whenever $0 < x < 1$ and zero otherwise. Find a complete sufficient statistic for the parameter $\theta$.

We begin by verifying that the density is a member of the Regular Exponential Class (REC) by showing that $f_X(x) = \exp\{\ln(\theta x^{\theta-1})\} = \exp\{\ln(\theta) + (\theta - 1)\ln(x)\} = \theta \exp\{(\theta - 1)\ln(x)\}$, where $q_1(\theta) = \theta - 1$ and $t_1(x) = \ln(x)$. Then we know that the statistic $S = \sum_{i=1}^n t_1(X_i) = \sum_{i=1}^n \ln(X_i)$ is complete sufficient for the parameter $\theta$.
Question #15: Let $X_1, \ldots, X_n$ be a random sample from the density $f_X(x) = \theta x^{\theta-1}$ whenever $0 < x < 1$ and zero otherwise. Find the Maximum Likelihood Estimator (MLE) for $\theta > 0$.

We have $L(\theta) = \prod_{i=1}^n \theta x_i^{\theta-1} = \theta^n \left(\prod_{i=1}^n x_i\right)^{\theta-1}$, so $\ln[L(\theta)] = n\ln(\theta) + (\theta - 1)\sum_{i=1}^n \ln(x_i)$ and
$$\frac{\partial}{\partial \theta} \ln[L(\theta)] = \frac{n}{\theta} + \sum_{i=1}^n \ln(x_i) = 0 \rightarrow \theta = \frac{-n}{\sum_{i=1}^n \ln(x_i)}.$$
Thus, we have found that the Maximum Likelihood Estimator is $\hat{\theta}_{MLE} = \dfrac{-n}{\sum_{i=1}^n \ln(X_i)}$.
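A simulation sketch of this estimator with an illustrative $\theta = 3$: since $F_X(x) = x^\theta$ on $(0, 1)$, a draw is $X = U^{1/\theta}$.

```python
import numpy as np

rng = np.random.default_rng(10)
theta, n = 3.0, 100_000
x = rng.random(n) ** (1.0 / theta)   # inverse-CDF draws from theta * x^(theta-1)

print(-n / np.log(x).sum())   # MLE; should be close to theta = 3.0
```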
Question #16: Let $X_1, \ldots, X_n$ be a random sample from the density $f_X(x) = \theta x^{\theta-1}$ whenever $0 < x < 1$ and zero otherwise. Find the Method of Moments Estimator (MME) for $\theta > 0$.

We have $E(X) = \int_0^1 x f_X(x)\, dx = \int_0^1 x \theta x^{\theta-1}\, dx = \theta \int_0^1 x^\theta\, dx = \frac{\theta}{\theta+1}\left[x^{\theta+1}\right]_0^1 = \frac{\theta}{\theta+1}$, so that setting $\mu_1' = M_1'$ gives $\frac{\theta}{\theta+1} = \bar{X} \rightarrow \theta(1 - \bar{X}) = \bar{X}$, and the desired estimator is $\hat{\theta}_{MME} = \dfrac{\bar{X}}{1 - \bar{X}}$.
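A companion sketch to the check after Question #15 (same assumed $\theta = 3$): the MME should also land near $\theta$, though in general it differs from the MLE on any given sample.

```python
import numpy as np

rng = np.random.default_rng(11)
theta, n = 3.0, 100_000
x = rng.random(n) ** (1.0 / theta)   # inverse-CDF draws from theta * x^(theta-1)

xbar = x.mean()
print(xbar / (1.0 - xbar))   # MME; should be close to theta = 3.0
```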