Tutorial Question 2
153400119
Dr. Ben Groom
L = Π_i θ^{x_i} (1 − θ)^{1 − x_i}

⇒ ln L = Σ_i x_i ln θ + (n − Σ_i x_i) ln(1 − θ)
Q2. The log likelihood function for the linear model y_i = α + βx_i + u_i, where
u_i ~ N(0, σ²), is given by:

ln f(x_1, x_2, …, x_n) = −(n/2) ln(2π) − (n/2) ln σ² − (1/2) Σ_i (u_i/σ)²
Show that the maximum likelihood estimator of the variance is equal to RSS/n, where
RSS = Σ_i û_i² (the sum of squared residuals) in the linear model.
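Setting ∂ ln L/∂σ² = −n/(2σ²) + Σ_i u_i²/(2σ⁴) = 0 yields σ̂² = RSS/n. The script below checks this numerically for a simulated linear model (the coefficients, error scale, and seed are arbitrary choices for illustration): it fits OLS to get the residuals, then maximises the log likelihood over σ² on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for an illustrative linear model y = alpha + beta*x + u
n = 200
x = rng.normal(size=n)
u = rng.normal(scale=1.5, size=n)
y = 2.0 + 0.5 * x + u

# OLS (= ML) estimates of alpha and beta, then residuals and RSS
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
rss = resid @ resid

# Log likelihood as a function of sigma^2, residuals held at the OLS fit
def log_lik(s2):
    return -n / 2 * np.log(2 * np.pi) - n / 2 * np.log(s2) - rss / (2 * s2)

# Maximise over a fine grid of candidate sigma^2 values
grid = np.linspace(0.5, 5.0, 45001)
s2_hat = grid[np.argmax(log_lik(grid))]
print(s2_hat, rss / n)  # the grid maximiser matches RSS/n
```

The grid maximiser and RSS/n agree to within the grid spacing, as the first-order condition predicts.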
Q3. Let Y denote the sample average from a random sample with mean µ and
variance σ². Consider two alternative estimators of µ: W_1 = [(n − 1)/n] Y and
W_2 = Y/2.
i) Show that W_1 and W_2 are both biased estimators of µ and find the
biases. What happens to the biases as n → ∞? Comment on any important
differences in bias between the two estimators as the sample size gets large.
ii) Find the probability limits of W_1 and W_2.
iii) Find var(W_1) and var(W_2).
iv) Compare W_1 and W_2.
v) Argue that W_1 is a better estimator than Y if µ is close to zero (Hint:
consider both variance and bias).
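Using E[Y] = µ and var(Y) = σ²/n, the biases and variances of both estimators follow directly. A short script computing them (the values of µ and σ² below are illustrative, not part of the question):

```python
import numpy as np

# Analytic bias and variance of W_1 = [(n-1)/n] Ybar and W_2 = Ybar / 2,
# using E[Ybar] = mu and var(Ybar) = sigma2 / n.
def bias_var(n, mu, sigma2):
    bias_w1 = ((n - 1) / n) * mu - mu          # = -mu / n
    var_w1 = ((n - 1) / n) ** 2 * sigma2 / n
    bias_w2 = mu / 2 - mu                      # = -mu / 2
    var_w2 = sigma2 / (4 * n)
    return bias_w1, var_w1, bias_w2, var_w2

mu, sigma2 = 1.0, 4.0
for n in (10, 100, 10000):
    b1, v1, b2, v2 = bias_var(n, mu, sigma2)
    # bias of W_1 shrinks like 1/n; bias of W_2 stays fixed at -mu/2
    print(n, b1, b2)
```

For part v), note that MSE(W_1) = var(W_1) + µ²/n², which is below MSE(Y) = σ²/n when µ is near zero, since the variance shrinkage dominates the (tiny) squared bias.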
i) The likelihood ratio test: use the LR test to test the null hypothesis that
β_2 = β_4 in the following model.
ii) Using the Wald test, test the same null hypothesis for the model above:
H_0: β_2 = β_4 against H_1: β_2 ≠ β_4.
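Both tests can be sketched numerically for a hypothetical four-regressor model (the data-generating process below is invented for illustration, with β_2 = β_4 true). The restricted fit imposes β_2 = β_4 by replacing x_2 and x_4 with their sum; the LR statistic is n ln(RSS_r/RSS_u) and the Wald statistic uses R = [0, 1, 0, −1] on the unrestricted coefficients, each χ²(1) under H_0.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model y = b1 + b2*x2 + b3*x3 + b4*x4 + u, with b2 = b4 true
n = 500
x2, x3, x4 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x2 + 2.0 * x3 + 0.5 * x4 + rng.normal(size=n)

def fit(X, y):
    # OLS coefficients and residuals
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

Xu = np.column_stack([np.ones(n), x2, x3, x4])   # unrestricted
Xr = np.column_stack([np.ones(n), x2 + x4, x3])  # restricted: b2 = b4
bu, ru = fit(Xu, y)
br, rr = fit(Xr, y)
rss_u, rss_r = ru @ ru, rr @ rr

# Likelihood ratio statistic: LR = n * ln(RSS_r / RSS_u) ~ chi2(1) under H0
LR = n * np.log(rss_r / rss_u)

# Wald statistic: W = (R b)' [R * s2 (X'X)^-1 * R']^-1 (R b)
R = np.array([0.0, 1.0, 0.0, -1.0])
s2 = rss_u / n                        # ML estimate of sigma^2
V = s2 * np.linalg.inv(Xu.T @ Xu)     # covariance of the coefficients
W = (R @ bu) ** 2 / (R @ V @ R)
print(LR, W)  # compare each to the chi2(1) critical value (3.84 at 5%)
```

Since RSS_r ≥ RSS_u by construction, both statistics are nonnegative, and LR ≤ W here because ln(1 + d) ≤ d for d = (RSS_r − RSS_u)/RSS_u.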