
Econ 520, Final Review Questions

Note: Many of these questions are drawn from previous years’ finals. I will not provide solutions to
these questions. The final will cover material from the entire semester, but with more weight on the
second half of the course.

1. Suppose that X1 is uniformly distributed on (0, 1) and X2 is uniformly distributed on (0, 2),
and X1 and X2 are independent. Let Y ≡ max(X1 , X2 ).

(a) Derive the CDF and PDF of Y.
(b) Calculate E[Y].
(c) Suppose that instead of being independent, X1 and X2 are related in the following way: X1 is uniform (0, 1), and X2 = 2 · X1. Let Y be defined as before. Now calculate the CDF and PDF of Y.
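A quick way to sanity-check your answers to (a)–(c) is Monte Carlo simulation. The sketch below is not part of the original questions; it simply estimates E[Y] in both the independent and the dependent case so you can compare against your own derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# independent case: X1 ~ U(0,1), X2 ~ U(0,2)
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 2.0, n)
y = np.maximum(x1, x2)
print("independent case, E[Y] estimate:", y.mean())

# dependent case: X2 = 2 * X1, so Y = max(X1, 2 * X1)
x1d = rng.uniform(0.0, 1.0, n)
yd = np.maximum(x1d, 2.0 * x1d)
print("dependent case, E[Y] estimate:", yd.mean())
```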

2. Let X and Y be jointly distributed random variables with joint PMF

       fXY (x, y) = .3 if (x, y) = (3, 0) or (0, 6)
                    .2 if (x, y) = (6, 6)
                    .1 if (x, y) = (3, 6) or (6, 0)
                    0  otherwise

and let U ∼ U nif [0, 1] where U is independent of (X, Y ).

(a) Calculate Pr(.75 ≤ U ≤ 1.5).
(b) Let Tn = 1 + U/n. Then Tn → α in probability. Obtain the numerical value of α. Use the definition of convergence in probability to prove your answer.
(c) What is the PMF for X?
(d) What is Cov(X, Y)?
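Because the support of (X, Y) is finite, parts (c) and (d) can be checked by direct enumeration. The sketch below is an added check, not part of the original sheet; it computes the marginal PMF of X and Cov(X, Y) straight from the table.

```python
# joint PMF as a dict: (x, y) -> probability
pmf = {(3, 0): .3, (0, 6): .3, (6, 6): .2, (3, 6): .1, (6, 0): .1}

# marginal PMF of X: sum the joint PMF over y
px = {}
for (x, y), p in pmf.items():
    px[x] = px.get(x, 0.0) + p

ex = sum(x * p for (x, y), p in pmf.items())
exy = sum(x * y * p for (x, y), p in pmf.items())
ey = sum(y * p for (x, y), p in pmf.items())
cov = exy - ex * ey

print("P(X = x):", px)
print("Cov(X, Y):", cov)
```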

3. Suppose Xn ∼ Bin(n, α/n), where α > 0.

(a) Xn → Y in distribution. What is the probability mass function for Y?
(b) Does Xn/√n → 0 in probability? Prove your answer.
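For intuition on (a): under Bin(n, α/n) we have P(Xn = 0) = (1 − α/n)^n, and you can watch this probability approach its limit as n grows. The snippet below is a numerical illustration added here, not a proof.

```python
import math

alpha = 2.0
for n in (10, 100, 10_000):
    p0 = (1 - alpha / n) ** n        # P(Xn = 0) when Xn ~ Bin(n, alpha/n)
    print(f"n = {n:>6}: P(Xn = 0) = {p0:.6f}")
print(f"limit:      exp(-alpha) = {math.exp(-alpha):.6f}")
```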

4. Suppose that Y1 , . . . , Yn are IID with a discrete distribution with PMF

       fY (y; K) = 1/K for y = 1, 2, . . . , K,
                   0   otherwise.
The parameter K is an integer ≥ 1.

(a) Find the maximum likelihood estimator for the parameter K.
(b) Show that if K0 is the true value of the parameter, then the MLE K̂ has the following
properties:
P r(K̂ = 1) > 0;

P r(K̂ > K0 ) = 0.

(c) Use your result from (b) to show that the MLE K̂ is biased towards 0.
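The properties in (b) are easy to see in simulation for the sample maximum, which is a natural candidate estimator here (whether it is in fact the MLE is for you to verify in (a)). This added sketch uses the arbitrary choices K0 = 5 and n = 10.

```python
import numpy as np

rng = np.random.default_rng(1)
K0, n, reps = 5, 10, 200_000

samples = rng.integers(1, K0 + 1, size=(reps, n))   # IID uniform on {1, ..., K0}
khat = samples.max(axis=1)                          # sample maximum in each replication

print("P(khat > K0) estimate:", (khat > K0).mean())   # exactly 0: the max never exceeds K0
print("E[khat] estimate:", khat.mean(), "vs K0 =", K0)
```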

5. Let X be a random variable with probability density function

       fX (x; µ) = (1/µ) exp(−x/µ),

   for x > 0 and zero elsewhere.

(a) Calculate the mean and variance of X.
(b) Calculate the mean and variance of X conditional on X < 8.
(c) Let x1 , x2 , . . . , xN be a random sample from this distribution, with N = 20, Σ xi = 95, and Σ xi² = 590. Calculate the maximum likelihood estimate.
(d) Test the hypothesis that µ = 4 at the 10% level using a likelihood ratio test.
(e) Test the same hypothesis using a Lagrange multiplier (score) test.
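Once you have derived the MLE in (c), the LR statistic in (d) reduces to arithmetic on the summary statistics. The scratchpad below is an added check, not part of the sheet; it takes as given the standard result that for this exponential density the MLE of µ is the sample mean (verify this in your own derivation).

```python
import math

N, sum_x = 20, 95.0
mu_hat = sum_x / N            # candidate MLE: the sample mean

def loglik(mu):
    # log-likelihood of the exponential(mean mu) sample, up to constants
    return -N * math.log(mu) - sum_x / mu

lr = 2 * (loglik(mu_hat) - loglik(4.0))   # LR statistic for H0: mu = 4
print(f"mu_hat = {mu_hat},  LR = {lr:.3f},  10% chi2(1) critical value = 2.706")
```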

6. Let the marginal distribution of X be binomial with N = 1 and p = 1/4. Conditional on X, the random variable Y has a normal distribution with mean µ · (X + 1) and variance 1.

(a) Find the marginal density of Y .


(b) Suppose you have a random sample of size N from this joint distribution. What is the
maximum likelihood estimator and its large sample variance?
(c) Suppose you only observe y1 , . . . , yN . Find an unbiased estimator for µ. What is its
large sample variance and how does that compare to that of the maximum likelihood
estimator derived before?
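Simulating the joint model is a useful way to check the marginal density in (a) and any moment formulas behind your estimator in (c). This added sketch (with µ = 2 chosen arbitrarily) estimates E[Y], which by iterated expectations equals E[µ(X + 1)] = 1.25µ.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, n = 2.0, 500_000                      # mu chosen arbitrarily for the check

x = rng.binomial(1, 0.25, n)              # X ~ Bin(1, 1/4)
y = rng.normal(mu * (x + 1), 1.0)         # Y | X ~ N(mu*(X+1), 1)

print("E[Y] estimate:", y.mean(), "vs 1.25 * mu =", 1.25 * mu)
```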

7. Suppose there is a random sample of size 10 (X1 , . . . , X10 ) from a Poisson distribution (so fXi (xi |θ) = θ^xi e^(−θ) / xi ! when xi is a nonnegative integer). The sample mean of the random sample is 4.

(a) Using the sample, what is the maximum likelihood estimate for θ?
(b) Provide an approximate 95% confidence interval for θ.
(c) Using a large-sample LR test, test the hypothesis that θ = 3 at the 0.05 level. (Note that
if Z is a chi-squared random variable with 1 degree of freedom, P r(Z > 3.84) = 0.05.)

(d) Now suppose that the prior distribution for θ is Gamma(3, 5):

        θ ∼ Gamma(3, 5).

    What is the posterior distribution for θ given X1 , . . . , X10 ? (Hint: Z ∼ Gamma(α, β) means fZ (z|α, β) = z^(α−1) e^(−z/β) / (Γ(α) β^α).)
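The numbers in (a)–(c) all follow from n = 10 and a sample mean of 4. The scratchpad below is an added check, taking as given the standard result that the Poisson MLE is the sample mean (which you should verify in (a)).

```python
import math

n, xbar = 10, 4.0
theta_hat = xbar                              # Poisson MLE: the sample mean
se = math.sqrt(theta_hat / n)                 # large-sample SE from the Fisher information
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)

def loglik(theta):
    # Poisson log-likelihood up to the constant sum(log xi!)
    return n * xbar * math.log(theta) - n * theta

lr = 2 * (loglik(theta_hat) - loglik(3.0))    # LR statistic for H0: theta = 3
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f}),  LR = {lr:.3f},  reject at 5%? {lr > 3.84}")
```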

8. A firm samples machine parts until it finds a defective part; let Xi be the number of parts sampled up to and including the first defective one. Assume that each sampled part is defective with probability p, independently of the others.

(a) Derive the probability mass function of Xi .


(b) Suppose that the firm obtains independent observations X1 , . . . , Xn . The firm has a
Beta(1,1) prior distribution for p. What is the posterior distribution for p?
Note: the Beta distribution with parameters α and β has PDF

        f(x) = [Γ(α + β) / (Γ(α)Γ(β))] · x^(α−1) (1 − x)^(β−1).
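For (a), the sampling scheme can be simulated directly. The added sketch below draws the number of parts inspected until the first defective one and compares the empirical mean with 1/p, the mean of the count distribution this scheme produces.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.25

def draws_until_defective():
    # inspect parts until one is defective; each is defective w.p. p, independently
    k = 1
    while rng.random() >= p:
        k += 1
    return k

xs = np.array([draws_until_defective() for _ in range(100_000)])
print("empirical mean:", xs.mean(), "vs 1/p =", 1 / p)
```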

9. Suppose we have a random sample of individuals from a population, and assume that earnings in a given month for the ith sampled individual are Xi , where Xi has PDF

       f(x; θ) = (1 / (x √2π)) exp( −(log x − θ)² / 2 ),   x > 0,

   where the parameter θ ∈ R.

(a) Is the statistical model an exponential family?


(b) Derive the MLE for θ based on a sample of size n. Is the MLE a minimum variance
unbiased estimator?
(c) Construct a large sample Wald test for the hypothesis that θ = θ0 . (Hint: if possible,
show that the single-observation score function has variance 1. If you cannot show this,
you can just take as given that its variance is 1.)
(d) Suppose in our data set we observe n = 100, Σi Xi = 538, Σi log Xi = 127, and Σi Xi² = 5628. Calculate the MLE and provide a large sample 95% confidence interval for θ.
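Once you have the MLE from (b) and the unit score variance from (c), part (d) is arithmetic on the summary statistics. The scratchpad below is added here under two assumptions you should verify yourself: that the MLE turns out to be the sample mean of log Xi, and that each observation contributes information 1 (as the hint to (c) suggests).

```python
n, sum_log_x = 100, 127.0

theta_hat = sum_log_x / n          # assumed MLE: sample mean of log Xi
se = 1 / n ** 0.5                  # assumed unit information per observation
lo, hi = theta_hat - 1.96 * se, theta_hat + 1.96 * se
print(f"theta_hat = {theta_hat},  95% CI: ({lo:.3f}, {hi:.3f})")
```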
