2024-11-22 Slides 10
H0 : F ∈ F0 vs H1 : F ∈ F1 ,
Note: If cα,2 = ∞ (cα,1 = ∞), the test is a left-tailed (right-tailed) test, and if
−∞ < cα,1 , cα,2 < ∞, the test is a two-tailed test (i.e. the critical region lies in
the respective tail(s) of FTn ).
© Marius Hofert Section 8.3 | p. 223
Example 8.1 (Identifying φ, π, α, β)
Let θ ∈ (0, 1) be the probability of recovery after treatment with a new medication
and X1, …, Xn ∼ind. B(1, θ) the recovery indicators of n patients. The manufacturer
of the medication tests H0 : θ = θ0 vs H1 : θ = θ1 for some 0 < θ0 < θ1 < 1 with
test statistic Tn = Tn(X) = Σ_{i=1}^n Xi and critical region C = {x : Tn(x) > c} for
some critical value c ∈ {1, …, n}. Identify the test's φ, π, α and β.
Solution. φ(x) = 1_{x∈C} = 1_{Tn(x)>c}, π(θ) = P(φ(X) = 1) = P(X ∈ C) =
P(Tn(X) > c) = F̄_{B(n,θ)}(c), α = sup_{θ∈Θ0} π(θ) = π(θ)|_{θ=θ0} = π(θ0) =
F̄_{B(n,θ0)}(c) and β = 1 − π(θ)|_{θ=θ1} = F_{B(n,θ1)}(c).
α fixed, θ runs: If θ is large, then Tn is expected to be large (since E(Tn (X)) =
nθ), so under H1 we need to reject H0 more often ⇒ C = {x : Tn (x) > c}
makes sense here.
θ fixed, α runs: A larger α should lead to a smaller c (higher rejection probability
under H0 ), so a larger C and thus a larger φ. In short, 0 < α1 < α2 < 1 ⇒
cα1 > cα2 ⇒ Cα1 ⊆ Cα2 ⇒ 0 ≤ φα1 ≤ φα2 ≤ 1.
For the test to keep its level α, i.e. for sup_{F∈F0} π(F) (here = π(θ0)) ≤ α, c cannot
be too small. So c = cα, thus C = Cα and φ = φα all depend on α.
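The quantities in Example 8.1 can be computed directly from the B(n, θ) tail function; a minimal sketch with hypothetical values n = 20, θ0 = 0.5, θ1 = 0.7 and c = 13 (not from the slides):

```python
from math import comb

def binom_sf(c, n, theta):
    """F̄_{B(n,theta)}(c) = P(Tn > c) for Tn ~ B(n, theta)."""
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k)
               for k in range(c + 1, n + 1))

# hypothetical example values (not from the slides)
n, theta0, theta1, c = 20, 0.5, 0.7, 13

alpha = binom_sf(c, n, theta0)       # size: pi(theta0) = F̄_{B(n,theta0)}(c)
beta = 1 - binom_sf(c, n, theta1)    # type II error: F_{B(n,theta1)}(c)
print(f"alpha = {alpha:.4f}, beta = {beta:.4f}")
```

Increasing c shrinks the critical region and thus the size, matching the "θ fixed, α runs" discussion above.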
© Marius Hofert Section 8.3 | p. 224
8.4 The notion of p-value
Typically, φα is monotone in α, so φα1 ≤ φα2 ∀ 0 < α1 < α2 < 1 (the smaller
the rejection prob. α, the less often H0 is rejected). Instead of reporting the test
decision φα(x), one can then report the smallest level at which H0 is still rejected,
the p-value p(x) = inf{α ∈ (0, 1) : φα(x) = 1}.
For given α, reject H0 iff p(x) ≤ α. This gives a test's decision for all α and is
thus frequently reported as test result (not just whether H0 was rejected).
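For a right-tailed test whose statistic is standard normal under H0, the p-value is 1 − Φ(Tn(x)), and "reject iff p(x) ≤ α" reproduces the critical-region decision for every α; a small sketch with a hypothetical observed statistic:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

t_obs = 2.1                  # hypothetical observed test statistic
p_value = 1 - Phi(t_obs)     # smallest alpha at which H0 is rejected

for alpha in (0.01, 0.05, 0.10):
    decision = "reject" if p_value <= alpha else "do not reject"
    print(f"alpha = {alpha}: {decision} H0")
```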
Solution.
In this case Θ = {λ : λ > 0} = Θ0 ⊎ Θ1 for Θ0 = {λ0} and Θ1 = (0, ∞)\{λ0}.
Under H0, Tn = √n (X̄n − λ0)/√X̄n →d N(0, 1) as n → ∞ (by the CLT and
Slutsky's theorem).
For sufficiently large n, the critical region is thus Cα = {x : |Tn(x)| > z_{1−α/2} =
Φ^{−1}(1 − α/2)}.
Since var(X1) = λ, which equals λ0 under H0, we could have also considered
Tn = √n (X̄n − λ0)/√λ0.
The pivots in R. 8.7 and 8.8 were the same as in R. 6.32 and 6.33.
As we can see from these remarks, the (1 − α)-CIs are the complements of the
critical regions of the respective two-tailed tests.
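A numeric sketch of the two-tailed test above, with hypothetical data (n = 50 observations with sum 60, λ0 = 1, α = 5%); both studentized versions lead to the same decision here:

```python
from math import sqrt

n, sum_x, lam0 = 50, 60, 1.0   # hypothetical data (not from the slides)
xbar = sum_x / n               # 1.2
z_975 = 1.959964               # Phi^{-1}(0.975)

t1 = sqrt(n) * (xbar - lam0) / sqrt(xbar)  # Slutsky version (sqrt(X̄n))
t2 = sqrt(n) * (xbar - lam0) / sqrt(lam0)  # variance evaluated under H0
print(abs(t1) > z_975, abs(t2) > z_975)    # reject H0?
```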
Solution.
1) Let φα^NP be a test with critical region satisfying (ii)–(iii) of the NPL, i.e.
Cα = {x : fX(x; µ1) > η fX(x; µ0)} = {x : L(µ0; x)/L(µ1; x) < 1/η}. With
L(µ; x) = ∏_{i=1}^n (1/σ) φ((xi − µ)/σ) = (2πσ²)^{−n/2} e^{−(1/2) Σ_{i=1}^n ((xi − µ)/σ)²},
we must have
L(µ0; x)/L(µ1; x) = e^{−(1/(2σ²)) Σ_{i=1}^n ((xi − µ0)² − (xi − µ1)²)} (multiply out)
= e^{−(1/(2σ²)) (2n x̄n (µ1 − µ0) − n(µ1² − µ0²))} < 1/η,
which happens iff x̄n > (2σ² log(η) + n(µ1² − µ0²))/(2n(µ1 − µ0)) =: cα
(using µ0 < µ1), so Cα = {x : x̄n > cα}.
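The equivalence just derived — the likelihood ratio falls below 1/η iff x̄n exceeds cα — can be checked numerically; a sketch with hypothetical values µ0 = 0, µ1 = 1, σ = 1, n = 10, η = 2 (note the ratio depends on x only through x̄n):

```python
from math import exp, log

mu0, mu1, sigma, n, eta = 0.0, 1.0, 1.0, 10, 2.0  # hypothetical values

def lik_ratio(xbar):
    """L(mu0; x) / L(mu1; x) as a function of xbar (multiplied out)."""
    return exp(-(2*n*xbar*(mu1 - mu0) - n*(mu1**2 - mu0**2))
               / (2*sigma**2))

# critical value from the derivation above
c = (2*sigma**2*log(eta) + n*(mu1**2 - mu0**2)) / (2*n*(mu1 - mu0))

for k in range(-20, 41):          # grid of xbar values
    xbar = k / 20
    assert (lik_ratio(xbar) < 1/eta) == (xbar > c)
print(f"c_alpha = {c:.4f}")
```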
H0 : µ = µ0 vs H1 : µ ̸= µ0 .
Solution.
The log-likelihood is (by E. 7.12) ℓ(µ; x) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^n (xi − µ)².
So the restricted log-likelihood is ℓ(µ0; x) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^n (xi − µ0)².
The unrestricted MLE is (by E. 7.12) µ̂n = X̄n with log-likelihood
ℓ(X̄n; X) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^n (Xi − X̄n)².
Therefore, Tn = −2(ℓ(µ0) − ℓ(µ̂n)) = (1/σ²) Σ_{i=1}^n ((Xi − µ0)² − (Xi − X̄n)²)
(multiply out) = (n/σ²)(X̄n − µ0)² = (√n (X̄n − µ0)/σ)².
Under H0, Tn ∼ χ²_{1−0} = χ²_1 approximately for large n, so
Cα = {x : Tn(x) > F^{−1}_{χ²_1}(1 − α)}.
Alternatively, we know that α = P_{H0}(Tn(X) > cα) = P_{H0}((√n (X̄n − µ0)/σ)² > cα) =
P_{H0}(|√n (X̄n − µ0)/σ| > c̃α) = P(|Z| > c̃α) for Z ∼ N(0, 1), from which we obtain
that c̃α = z_{1−α/2} and thus the equivalent critical region Cα = {x : |√n (x̄n − µ0)/σ| >
z_{1−α/2}}.
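The two critical regions give the same decision, since F^{−1}_{χ²_1}(1 − α) = z²_{1−α/2}; a sketch with hypothetical data (n = 25, x̄n = 10.5, µ0 = 10, σ = 1, α = 5%):

```python
from math import sqrt

n, xbar, mu0, sigma = 25, 10.5, 10.0, 1.0  # hypothetical data
z = sqrt(n) * (xbar - mu0) / sigma         # standardized mean
t = z**2                                   # LRT statistic Tn

z_975 = 1.959964                           # Phi^{-1}(0.975)
chi2_95 = z_975**2                         # chi^2_1 0.95-quantile (~3.8415)

print(t > chi2_95, abs(z) > z_975)         # same decision both ways
```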
Example 8.14 (Two-tailed LRT for Exp(λ))
Let X1, …, Xn ∼ind. Exp(λ), λ > 0. Find the LRT of size α for testing H0 : λ = λ0
vs H1 : λ ≠ λ0. Apply it to test λ0 = 1 at significance level 5% based on n = 100
observed losses with sum 125.
Solution.
Based on observations x = (x1 , . . . , xn ), the likelihood is L(λ; x) = (λe−λx̄n )n ,
λ > 0, with log-likelihood ℓ(λ; x) = n(log(λ) − λx̄n ), λ > 0. With ℓ′ (λ; x) =
n( λ1 − x̄n ) and ℓ′′ (λ; x) = − λn2 , we see that the MLE is 1/X̄n .
The LRT statistic is therefore
Tn = −2(ℓ(λ0; X) − ℓ(1/X̄n; X)) = −2n(log(λ0) − λ0 X̄n − log(1/X̄n) + 1)
= −2n(log(λ0 X̄n) − λ0 X̄n + 1),
where ℓ(λ0; X) uses the value under H0 and ℓ(1/X̄n; X) the MLE.
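Applying this to the stated data (n = 100 losses with sum 125, λ0 = 1, α = 5%) and comparing Tn to the χ²_1 0.95-quantile under the usual large-n approximation, a quick check:

```python
from math import log

n, sum_x, lam0 = 100, 125, 1.0   # data from Example 8.14
xbar = sum_x / n                 # 1.25
t = -2 * n * (log(lam0 * xbar) - lam0 * xbar + 1)  # LRT statistic Tn
chi2_95 = 3.841459               # chi^2_1 0.95-quantile
print(f"Tn = {t:.3f}, reject H0: {t > chi2_95}")
```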