Simulating Maximum Likelihood Estimators
Corbin Miller
Stat 342
February 14, 2011
1 Uniform distribution: (0, θ), x ≤ θ
Maximum Likelihood Estimation produces a parameter estimator that makes the values of
the observed data most likely to have occurred. The likelihood function is the joint density
function of n random variables X1 , ..., Xn evaluated at x1 , ..., xn . First order conditions are
often used to find the parameter value that maximizes the likelihood function. In cases where the
likelihood function is not differentiable, other methods must be used to find the maximum.
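In symbols, for a random sample x_1, ..., x_n with common pdf f(x; θ), the likelihood is L(θ) = f(x_1; θ) · · · f(x_n; θ), viewed as a function of the parameter θ rather than of the data.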
The first example is a uniform distribution with parameters (0, θ), whose pdf is positive where x ≤ θ and is zero otherwise. To derive the Maximum Likelihood Estimator
(MLE) of θ, I first need the likelihood function. Since the simulated data is a random sample, each observation is independent and thus the joint probability density function (pdf) is the product of the individual pdfs. Each pdf will be 1/θ; thus, the likelihood function will be 1/θ^n.
Since this function is decreasing in θ, the smallest value of θ with a non-zero likelihood will be the MLE. Also, if any observed x exceeds θ, then the likelihood is zero. Thus, the maximum of the likelihood function should be where θ equals the largest observed x, called the maximum order statistic (denoted by x_{n:n}).
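To make this concrete, here is a minimal Python sketch (illustrative only, not the code behind the report's figures; the seed and grid are my own arbitrary choices) that evaluates the uniform likelihood on a grid and confirms the maximizer is the maximum order statistic:

```python
import numpy as np

rng = np.random.default_rng(342)  # arbitrary seed

theta_true = 3.0
x = rng.uniform(0, theta_true, size=30)  # one simulated sample, N = 30

def likelihood(theta, x):
    """Uniform(0, theta) likelihood: 1/theta^n if all x <= theta, else 0."""
    return theta ** -len(x) if x.max() <= theta else 0.0

grid = np.linspace(2.0, 4.0, 2001)  # candidate theta values
L = np.array([likelihood(t, x) for t in grid])

print(grid[L.argmax()], x.max())  # grid maximizer sits at (or just above) x_{n:n}
```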
I then simulate data with θ = 3 and sample sizes 10, 30, 100, and 500. Each of the graphs in Figure 1 corresponds to one of these samples and shows a right angle where θ = x_{n:n}. It is clear to see that the likelihood is maximized when θ takes on the value of the maximum order statistic; this is where the likelihood jumps up from zero to its maximum value.

Figure 1: Likelihood plots, θ = 3. (a) N=10, θ̂ = 2.923; (b) N=30, θ̂ = 2.991; (c) N=100, θ̂ = 2.984; (d) N=500, θ̂ = 2.999

Table 1: Comparing θ̂_MLE with x_{n:n}

Table 1
reports the MLEs and maximum order statistics for each sample. Each MLE produces a close estimate of the true value of θ (within 0.1), and when rounded to three decimal places the MLE and the maximum order statistic are equal. It is also interesting to observe that the larger the sample size, the more accurate the estimator becomes.
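The improvement with sample size can itself be checked by simulation; the following sketch (seed and replication count are arbitrary choices of mine) estimates the average error of θ̂ = x_{n:n} at each sample size:

```python
import numpy as np

rng = np.random.default_rng(123)  # arbitrary seed
theta_true = 3.0

for n in (10, 30, 100, 500):
    # 5000 replications; each row is one simulated sample of size n.
    samples = rng.uniform(0, theta_true, size=(5000, n))
    mle = samples.max(axis=1)  # theta-hat = maximum order statistic
    print(n, np.abs(mle - theta_true).mean())  # mean absolute error shrinks as n grows
```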
2 Exponential: (1, η), x ≥ η

The second example comes from a two-parameter exponential with parameters (1, η), where x ≥ η. Here the parameter is bounded above by the data instead of below, so the MLE will be the minimum order statistic (x_{1:n}) as opposed to the maximum. The likelihood function for this distribution will be e^{−n(x̄−η)} for η ≤ x_{1:n} and zero otherwise. This produces likelihood graphs that look like a horizontal mirror image of those in the first example.

The simulated data is from an exponential with scale parameter 1 and location parameter
η = 5. The likelihood plots for three different sample sizes are given in Figure 2. A similar
right angle to the first example is seen, making it clear where the maximum is.

Figure 2: Likelihood plots, η = 5. (a) N=10, η̂_MLE = 5.124; (b) N=30, η̂_MLE = 5.035; (c) N=100, η̂_MLE = 5.022

The best estimate for η based on our data is the minimum order statistic. Shown in Table 2
are the exact estimates, and it is clear to see that the MLE and the order statistic are once again equal. The estimates also get closer to the truth as the sample size gets larger. If the sample size were to grow large enough, the bias would shrink toward zero and the estimates would converge to the true value of η.
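The same grid-search check as in the first example applies here; in this sketch (seed and grid again arbitrary), the maximizer of e^{−n(x̄−η)} over admissible η lands on the minimum order statistic:

```python
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed
eta_true = 5.0

x = eta_true + rng.exponential(scale=1.0, size=30)  # shifted exponential sample

def likelihood(eta, x):
    """Exp(scale 1, location eta) likelihood: e^{-n(xbar - eta)} if eta <= min(x), else 0."""
    return np.exp(-len(x) * (x.mean() - eta)) if eta <= x.min() else 0.0

grid = np.linspace(4.0, 6.0, 2001)  # candidate eta values
L = np.array([likelihood(e, x) for e in grid])

print(grid[L.argmax()], x.min())  # maximizer coincides with x_{1:n}
```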
3 Uniform: (θ − 1, θ), θ − 1 ≤ x ≤ θ
The final example is an interesting case since the x’s are bounded above and below. Upon
doing the analysis in the same way as the previous two it appears that there are many MLEs.
Since the pdf is 1 at any x between θ − 1 and θ, any estimated value of θ
between the lower and upper bounds will have a likelihood of 1. As n increases, the bounds of the estimates get closer together; eventually the estimates will converge to the single true value of θ.
Figure 3: θ = 2
Even without reporting the MLEs it would be possible to compute the bounds of the
estimates from the minimum and maximum observed x’s. The lower bound comes from the
maximum order statistic and the upper bound is the minimum order statistic plus 1.
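A minimal sketch of this shortcut (θ = 2 as in Figure 3; the seed and sample sizes are my own illustrative choices): the whole interval of MLEs is read directly off two order statistics, and its width shrinks as n grows:

```python
import numpy as np

rng = np.random.default_rng(2011)  # arbitrary seed
theta_true = 2.0

for n in (10, 30, 100, 500):
    x = rng.uniform(theta_true - 1, theta_true, size=n)
    lower = x.max()      # theta must be at least the maximum order statistic
    upper = x.min() + 1  # theta - 1 must be at most the minimum order statistic
    print(n, lower, upper, upper - lower)  # interval of MLEs narrows around theta = 2
```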
These results show that in some cases of bounded likelihood functions the MLE can be
determined directly from the order statistics. The great thing about this method is that
order statistics are among the easiest statistics to compute and exist for any sample, regardless of the underlying distribution.