Practice Session 3 With Answers
Solution
By Lehmann's theorem (Theorem 4.2), there exists a UMP test for one-sided problems provided that $p_\theta$ has the monotone likelihood ratio property (MLRP). Consider $\theta' < \theta''$; the likelihood ratio is given by

$$\frac{p_{\theta''}(x)}{p_{\theta'}(x)} = \frac{\prod_{i=1}^n \exp[\theta'' - X_i]\,\mathbb{1}[X_i \ge \theta'']}{\prod_{i=1}^n \exp[\theta' - X_i]\,\mathbb{1}[X_i \ge \theta']} = \exp[n(\theta'' - \theta')]\,\frac{\mathbb{1}[X_{(1)} \ge \theta'']}{\mathbb{1}[X_{(1)} \ge \theta']}$$
Now, note that $\exp[n(\theta'' - \theta')] > 0$. The ratio equals $0$ when $\theta' \le X_{(1)} < \theta''$ and $\exp[n(\theta'' - \theta')]$ when $X_{(1)} \ge \theta''$, so it is non-decreasing in $X_{(1)}$; we conclude that $p_\theta$ has the MLRP (non-decreasing) with respect to $X_{(1)}$.

[Figure: the likelihood ratio $p_{\theta''}(x)/p_{\theta'}(x)$ as a step function of $X_{(1)}$, jumping from $0$ to $\exp[n(\theta'' - \theta')]$ at $\theta''$.]
Now, given the direction of our test ($H_1: \theta \le \theta_0$), Lehmann's theorem tells us that the UMP test is of the form $W = \{X_{(1)} < k_0\}$, where $k_0$ is such that $P_{\theta_0}(X_{(1)} < k_0) = \alpha$. Using the properties of the minimum, we have that $P_{\theta_0}(X_{(1)} < k_0) = 1 - \exp[n(\theta_0 - k_0)]$. Hence, $k_0$ is obtained as

$$k_0 = \theta_0 - \frac{\log(1 - \alpha)}{n}$$
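As a sanity check, the size of this test can be verified by Monte Carlo simulation. The sketch below (with illustrative values for $\theta_0$, $n$ and $\alpha$; all variable names are ours) draws samples from the shifted exponential density $e^{\theta - x}\mathbb{1}[x \ge \theta]$ used above and estimates the rejection probability under $H_0$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, n, alpha = 2.0, 5, 0.05

# Critical value derived above: k0 = theta0 - log(1 - alpha) / n.
k0 = theta0 - np.log(1 - alpha) / n

# Under H0 (theta = theta0), X_i = theta0 + Exp(1), so the test
# W = {X_(1) < k0} should reject with probability alpha.
reps = 200_000
samples = theta0 + rng.exponential(1.0, size=(reps, n))
rejection_rate = np.mean(samples.min(axis=1) < k0)
```

With this many replications, the empirical rejection rate should land within a few tenths of a percentage point of $\alpha$.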
(Hint: you are not supposed to derive the precise critical values of the test; however, try to simplify as much as possible the conditions that the critical values must satisfy.)
Solution
By Lehmann's theorem (Theorem 4.2), there exists a UMP test for one-sided problems provided that $p_\theta$ has the monotone likelihood ratio property (MLRP). In an exponential family, where $p_\theta(x) = C(\theta)h(x)\exp(Q(\theta)T(x))$, this boils down to checking whether $Q(\theta)$ is strictly monotonic in $\theta$. If so, $p_\theta$ has the MLRP with respect to $T(X)$. Here, we have $Q(\lambda) = -\frac{1}{\lambda}$, which is strictly increasing, and $T(X) = \sum_{i=1}^n X_i$.
Using Lehmann's theorem, we have that $W = \{T(X) > c\}$, with $c$ such that $P_{\lambda_0}(T(X) > c) = \alpha$, is an $\alpha$-level UMP test. Let us work out the probability we just defined. Noting that $T(X) \sim \mathrm{Ga}(n, \lambda_0)$ under $H_0$, we take $c = q_{1-\alpha}$, where $q_{1-\alpha}$ is the $1-\alpha$ quantile of a $\mathrm{Ga}(n, \lambda_0)$. In conclusion, our $\alpha$-level UMP test is given by

$$W = \left\{ \sum_{i=1}^n X_i > q_{1-\alpha} \right\}$$
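A quick numerical check of this test's size, assuming the scale parametrization implied by $Q(\lambda) = -1/\lambda$ (each $X_i$ has mean $\lambda$, so $T(X) \sim \mathrm{Ga}(n, \text{scale} = \lambda_0)$ under $H_0$). Parameter values are illustrative; `scipy` supplies the Gamma quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, lam0, alpha = 10, 2.0, 0.05

# q_{1-alpha}: the 1-alpha quantile of Ga(n, lambda0), scale parametrization.
q = stats.gamma.ppf(1 - alpha, a=n, scale=lam0)

# Monte Carlo size of W = {sum X_i > q} under H0.
reps = 200_000
T = rng.exponential(lam0, size=(reps, n)).sum(axis=1)
size = np.mean(T > q)
```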
We can undertake the same reasoning as in subquestion a), except that we now reject $H_0$ when $T(X)$ is sufficiently small. Formally, the rejection region is $W = \{T(X) < c\}$, with $c$ such that $P_{\lambda_0}(T(X) < c) = \alpha$. Hence, $c = q_\alpha$, where $q_\alpha$ is the $\alpha$-quantile of a $\mathrm{Ga}(n, \lambda_0)$. In conclusion, our $\alpha$-level UMP test is given by

$$W = \left\{ \sum_{i=1}^n X_i < q_\alpha \right\}$$
Using Theorem 4.5 and Remark 4.2, our UMPU test is of the form $W(X) = \{T(X) < c_1 \text{ or } T(X) > c_2\}$, with $P_{\lambda_0}(W(X)) = \alpha$ and $E_{\lambda_0}[\mathbb{1}_{W(X)}\,T(X)] = \alpha\,E_{\lambda_0}[T(X)]$. In our case, we need

$$P_{\lambda_0}\left(\sum_{i=1}^n X_i < c_1\right) + P_{\lambda_0}\left(\sum_{i=1}^n X_i > c_2\right) = \alpha$$

$$E_{\lambda_0}\left[\sum_{i=1}^n X_i\,\mathbb{1}\left[\sum_{i=1}^n X_i < c_1\right]\right] + E_{\lambda_0}\left[\sum_{i=1}^n X_i\,\mathbb{1}\left[\sum_{i=1}^n X_i > c_2\right]\right] = n\lambda_0\alpha$$
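The pair $(c_1, c_2)$ has no closed form, but the two conditions can be solved numerically. The sketch below (illustrative parameter values, names ours) uses the identity $E[T\,\mathbb{1}[T < c]] = n\lambda_0\,F_{n+1}(c)$ for $T \sim \mathrm{Ga}(n, \lambda_0)$ in the scale parametrization, where $F_{n+1}$ is the CDF of $\mathrm{Ga}(n+1, \lambda_0)$, so that both conditions become statements about Gamma CDFs.

```python
import numpy as np
from scipy import optimize, stats

n, lam0, alpha = 10, 2.0, 0.05
F_n = stats.gamma(a=n, scale=lam0).cdf       # CDF of T ~ Ga(n, lambda0)
F_n1 = stats.gamma(a=n + 1, scale=lam0).cdf  # CDF of Ga(n+1, lambda0)

def conditions(c):
    c1, c2 = c
    size = F_n(c1) + 1.0 - F_n(c2) - alpha  # P(T < c1) + P(T > c2) = alpha
    # E[T 1{T < c1}] + E[T 1{T > c2}] = n*lam0*alpha, divided through by n*lam0:
    unbias = F_n1(c1) + 1.0 - F_n1(c2) - alpha
    return [size, unbias]

# Equal-tailed cut-offs make a reasonable starting point for the solver.
start = stats.gamma.ppf([alpha / 2, 1 - alpha / 2], a=n, scale=lam0)
c1, c2 = optimize.fsolve(conditions, start)
```

Because the Gamma distribution is skewed, the UMPU cut-offs differ from the equal-tailed ones in general.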
Hint: The waiting time between two events of this Poisson process is Exp(λ)-distributed. Hence, you can rephrase N(t) = m as a sample of waiting times.
Solution
We know that the waiting time between two events coming from a Poisson process behaves as an $\mathrm{Exp}(\lambda)$ variable. Since we know that $m$ events have occurred, we are facing a sample of $m$ waiting times $T_1, \ldots, T_m \overset{iid}{\sim} \mathrm{Exp}(\lambda)$.
In order to find a pivot, we can exploit the fact that if $X \sim \mathrm{Ga}(k, \theta)$, then $2\theta X \sim \chi^2_{2k}$. As such, we have
$$\begin{aligned}
1 - \alpha &= P\left(0 < \sum_{i=1}^m T_i < q_{1-\alpha}\right) \\
&= P\left(0 < 2\lambda \sum_{i=1}^m T_i < 2\lambda q_{1-\alpha}\right) \\
&= P\left(0 < 2\lambda \sum_{i=1}^m T_i < \chi^2_{2m;1-\alpha}\right) \\
&= P\left(0 < \lambda < \frac{\chi^2_{2m;1-\alpha}}{2\sum_{i=1}^m T_i}\right)
\end{aligned}$$

where $\chi^2_{2m;1-\alpha}$ is the $1-\alpha$ quantile of a $\chi^2_{2m}$. Hence, a $1-\alpha$ UMA confidence region for $\lambda$ is given by

$$\left(0,\ \frac{\chi^2_{2m;1-\alpha}}{2\sum_{i=1}^m T_i}\right)$$
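Numerically, the upper bound is one line of `scipy`. Note that here $\lambda$ is the rate of the Poisson process, so the waiting times have mean $1/\lambda$. The data and parameter values below are made up, and a short simulation checks the coverage of the region.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
m, lam_true, alpha = 30, 1.5, 0.05

# One sample of m waiting times and the resulting confidence region
# (0, chi2_{2m;1-alpha} / (2 * sum T_i)) for lambda.
T = rng.exponential(1.0 / lam_true, size=m)  # Exp(rate lambda) has mean 1/lambda
upper = stats.chi2.ppf(1 - alpha, df=2 * m) / (2.0 * T.sum())

# Coverage check: the region should contain the true lambda ~95% of the time.
sums = rng.exponential(1.0 / lam_true, size=(20_000, m)).sum(axis=1)
covered = np.mean(lam_true < stats.chi2.ppf(1 - alpha, df=2 * m) / (2.0 * sums))
```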
Let $Y \in \{0, 1\}$ be a binary outcome (e.g. healthy or ill). Based on a set of continuous decision variables $X_1, \ldots, X_p$, we want to develop a rule for predicting whether $Y = 0$ or $Y = 1$. Calling $d_0$ the decision to predict $Y = 0$ and $d_1$ the decision to predict $Y = 1$, we define the probability of false positive as

$$FPP(d_1) = P(Y = 0 \mid d_1).$$

Similarly, we define the probability of true positive as

$$TPP(d_1) = P(Y = 1 \mid d_1).$$

We define an optimal decision rule as the output of the following procedure.
Solution
We can visualize our problem as a test where $H_0 = \{Y = 0\}$ and $H_1 = \{Y = 1\}$. By the Neyman-Pearson theorem (Theorem 4.1), the optimal test is such that $H_0$ is rejected (i.e. we predict $Y = 1$) if the following is met:

$$\frac{p_{H_1}(x_1, \ldots, x_p)}{p_{H_0}(x_1, \ldots, x_p)} > c$$

Besides, we have

$$\frac{p_{H_1}(x_1, \ldots, x_p)}{p_{H_0}(x_1, \ldots, x_p)} = \frac{p(x_1, \ldots, x_p \mid Y = 1)}{p(x_1, \ldots, x_p \mid Y = 0)} = \frac{P(Y = 1 \mid x_1, \ldots, x_p)\,P(Y = 0)}{P(Y = 0 \mid x_1, \ldots, x_p)\,P(Y = 1)} = \frac{H(x^T\beta)\,P(Y = 0)}{[1 - H(x^T\beta)]\,P(Y = 1)}$$
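The last expression suggests a direct implementation: since $H$ is increasing, comparing the ratio to $c$ amounts to thresholding $H(x^T\beta)$. A minimal sketch, assuming $H$ is the logistic CDF and with made-up values for $\beta$, the class priors, and $c$:

```python
import numpy as np

def H(z):
    """Logistic CDF (assumed form of H in the model above)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_d1(x, beta, p0, p1, c):
    """Decide d1 (predict Y = 1) iff H(x'beta) p0 / ([1 - H(x'beta)] p1) > c."""
    h = H(x @ beta)
    return (h * p0) / ((1.0 - h) * p1) > c

# Made-up coefficients and two illustrative covariate vectors.
beta = np.array([0.5, -1.0])
x_high = np.array([4.0, 0.0])   # x'beta = 2.0  -> high P(Y = 1 | x)
x_low = np.array([-4.0, 0.0])   # x'beta = -2.0 -> low P(Y = 1 | x)
d_high = predict_d1(x_high, beta, p0=0.5, p1=0.5, c=1.0)
d_low = predict_d1(x_low, beta, p0=0.5, p1=0.5, c=1.0)
```

Equivalently, one predicts $Y = 1$ when $H(x^T\beta)$ exceeds the fixed threshold $cP(Y=1) / (P(Y=0) + cP(Y=1))$.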