Tutorial Question 2

The document contains 4 questions regarding maximum likelihood estimation and properties of estimators. Question 1 asks to find the MLE of θ for a Bernoulli distribution based on sample data. Question 2 shows that the MLE of the variance in a linear regression model equals the residual sum of squares divided by n. Question 3 compares two estimators of a population mean μ, finding their biases, probability limits, and variances as the sample size increases. Question 4 provides a practical exercise using data to conduct likelihood ratio and Wald tests of hypotheses regarding coefficients in a linear model.

Uploaded by

noora1124
Copyright
© Attribution Non-Commercial (BY-NC)
Available Formats
Download as DOC, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
355 views

Tutorial Question 2

The document contains 4 questions regarding maximum likelihood estimation and properties of estimators. Question 1 asks to find the MLE of θ for a Bernoulli distribution based on sample data. Question 2 shows that the MLE of the variance in a linear regression model equals the residual sum of squares divided by n. Question 3 compares two estimators of a population mean μ, finding their biases, probability limits, and variances as the sample size increases. Question 4 provides a practical exercise using data to conduct likelihood ratio and Wald tests of hypotheses regarding coefficients in a linear model.

Uploaded by

noora1124
Copyright
© Attribution Non-Commercial (BY-NC)
Available Formats
Download as DOC, PDF, TXT or read online on Scribd
You are on page 1/ 2

Applied Econometrics Techniques

153400119
Dr. Ben Groom

Problem Set 2: Large Sample Properties and


Maximum Likelihood Estimation
Q1. The variable Z equals 1 if an individual has been a victim of crime and 0
otherwise. It is believed that: 1) Z ~ Bernoulli(θ); and 2) the observations are independent. A
random survey obtains the following data: 1, 1, 1, 0, 1, 0, 0.

Given that the likelihood of the Bernoulli distribution is:

L = Π_i θ^(x_i) (1 − θ)^(1 − x_i)

(which means the data (1,1,0) would yield: θ · θ · (1 − θ))

therefore the log likelihood is:

⇒ ln L = Σ_i x_i ln θ + (n − Σ_i x_i) ln(1 − θ)

(in the case of data (1,1,0) the log likelihood is

ln θ + ln θ + ln(1 − θ) = 2 ln θ + (3 − 2) ln(1 − θ) = Σ_i x_i ln θ + (n − Σ_i x_i) ln(1 − θ))

What is the maximum likelihood estimate of θ ?
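Not part of the original handout, but a quick numerical sanity check is easy: setting d(ln L)/dθ = Σ_i x_i/θ − (n − Σ_i x_i)/(1 − θ) equal to zero gives the closed form θ̂ = Σ_i x_i/n. The short Python sketch below (variable names are my own, purely illustrative) confirms that a grid search over the log likelihood lands on the same value for the survey data above.

```python
import numpy as np

# Survey responses from Q1: 1,1,1,0,1,0,0
data = np.array([1, 1, 1, 0, 1, 0, 0])
n = data.size
s = data.sum()  # number of ones

# Closed-form MLE from the first-order condition: theta_hat = s/n
theta_hat = s / n  # 4/7

# Numerical check: ln L(theta) = s*ln(theta) + (n - s)*ln(1 - theta)
grid = np.linspace(0.01, 0.99, 9801)
loglik = s * np.log(grid) + (n - s) * np.log(1 - grid)
theta_grid = grid[np.argmax(loglik)]  # should sit next to 4/7
```

With four ones out of seven observations, both routes give θ̂ = 4/7 ≈ 0.571.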

Q2. The log likelihood function for the linear model y_i = α + βx_i + u_i, where
u_i ~ N(0, σ²), is given by:

ln L = ln f(x_1, x_2, …, x_n) = −(n/2) ln(2π) − (n/2) ln(σ²) − (1/2) Σ_i (u_i / σ)²

Show that the maximum likelihood estimator for the variance is equal to RSS/n, where
RSS = Σ_i û_i² in the linear model.

Q3. Let Y denote the sample average from a random sample with mean µ and
variance σ 2 . Consider two alternative estimators of µ . W1 = [ ( n −1) / n ]Y and
W2 = Y / 2 .

i) Show that W1 and W2 are both biased estimators of µ and find the
biases. What happens to the biases as n → ∞? Comment on any important
differences in bias between the two estimators as the sample size gets large.
ii) Find the probability limits of W1 and W2.
iii) Find var(W1) and var(W2).
iv) Compare W1 and W2.
v) Argue that W1 is a better estimator than Y if µ is close to zero (hint:
consider both variance and bias).

Q4. Practical Exercise.

Use the bwght dataset in Stata.

i) The likelihood ratio test: use the LR test to test the null hypothesis that
β 2 = β 4 in the following model.

bwght = α + β1cigs + β2 male + β3 drink + β4 feduc + u i

ii) Using the Wald test, test the following null hypothesis for the model above.
H 0 : β2 = β4 , H 1 : β2 ≠ β4 .
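The mechanics of both tests can be sketched outside Stata. This is not the handout's intended solution, and the real bwght data are not used here; instead the code simulates stand-in data (with β2 = β4 true, so H0 holds) just to show the two statistics. The restricted model imposes β2 = β4 by giving male + feduc a single shared coefficient; the LR statistic is n·ln(RSS_r/RSS_u), and the Wald statistic tests Rβ = 0 with R = (0, 0, 1, 0, −1).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for bwght (not the real data); beta2 = beta4 = 3 by design
n = 500
cigs = rng.poisson(2, n).astype(float)
male = rng.integers(0, 2, n).astype(float)
drink = rng.poisson(1, n).astype(float)
feduc = rng.integers(8, 18, n).astype(float)
bwght = 120 - 0.5 * cigs + 3.0 * male - 1.0 * drink + 3.0 * feduc \
    + rng.normal(scale=10, size=n)

def ols(X, y):
    """Return OLS coefficients and the residual sum of squares."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return b, r @ r

# Unrestricted model: bwght = a + b1*cigs + b2*male + b3*drink + b4*feduc + u
Xu = np.column_stack([np.ones(n), cigs, male, drink, feduc])
bu, rss_u = ols(Xu, bwght)

# Restricted model imposes b2 = b4: one shared coefficient on (male + feduc)
Xr = np.column_stack([np.ones(n), cigs, male + feduc, drink])
_, rss_r = ols(Xr, bwght)

# LR statistic: 2(lnL_u - lnL_r) = n*ln(RSS_r/RSS_u), chi-squared(1) under H0
LR = n * np.log(rss_r / rss_u)

# Wald statistic for R*beta = 0, R = (0,0,1,0,-1), using the MLE RSS_u/n
R = np.array([0.0, 0.0, 1.0, 0.0, -1.0])
V = (rss_u / n) * np.linalg.inv(Xu.T @ Xu)
W = (R @ bu) ** 2 / (R @ V @ R)
```

Both statistics are compared against a χ²(1) critical value; for a single linear restriction in this setup W = n(RSS_r − RSS_u)/RSS_u, so W ≥ LR always, though the two agree asymptotically. In Stata itself the corresponding commands are lrtest (after fitting both models) and test.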
