
Simulating Maximum Likelihood Estimators

Corbin Miller

Stat 342

February 14, 2011

1 Uniform distribution: (0, θ), x ≤ θ

Maximum likelihood estimation produces a parameter estimator that makes the observed data most likely to have occurred. The likelihood function is the joint density function of n random variables X1, ..., Xn evaluated at x1, ..., xn. First-order conditions are often used to find the parameter value that maximizes the likelihood function. In cases where the likelihood function is not differentiable, other methods are needed to find the maximum.

The first example of a non-differentiable likelihood comes from a uniform distribution with parameters (0, θ), whose density is positive for 0 ≤ x ≤ θ and zero otherwise. To estimate the maximum likelihood estimator (MLE) of θ I first need the likelihood function. Since the simulated data is a random sample, each observation is independent and thus the joint probability density function (pdf) is the product of the individual pdfs. Each pdf is 1/θ; thus, the likelihood function is 1/θⁿ. Since this function is decreasing in θ, the smallest value of θ that gives a non-zero likelihood will be the MLE. Also, if any observed x exceeds θ then the likelihood is zero. Thus, the likelihood function is maximized when θ equals the largest observed x, called the maximum order statistic (denoted by xn:n).
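The paper does not include its simulation code, so here is a minimal sketch of the procedure in Python with NumPy (the original analysis language is not stated, and the seed is arbitrary, so these draws will not reproduce the exact numbers reported below):

```python
import numpy as np

rng = np.random.default_rng(342)  # arbitrary seed, for reproducibility only

def uniform_likelihood(theta, x):
    """L(theta) = 1/theta^n if every observation satisfies x <= theta, else 0."""
    return np.where(theta >= x.max(), theta ** (-float(len(x))), 0.0)

theta_true = 3.0
for n in (10, 30, 100, 500):
    x = rng.uniform(0.0, theta_true, size=n)
    theta_hat = x.max()  # the maximum order statistic x_{n:n}
    print(f"N={n:3d}  theta_hat = {theta_hat:.6f}")
```

Evaluating uniform_likelihood on a grid of θ values reproduces the right-angle shape described below: the likelihood is zero for θ < xn:n, then decreases as 1/θⁿ.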

Figure 1: Likelihood from uniform distributions with parameters (0, θ) where θ = 3. Panels: (a) N=10, θ̂ = 2.923; (b) N=30, θ̂ = 2.991; (c) N=100, θ̂ = 2.984; (d) N=500, θ̂ = 2.999.

I then simulate data with θ = 3 and sample sizes 10, 30, 100, and 500. Each of the graphs in Figure 1 corresponds to one of these samples and shows a right angle where θ = xn:n.

Table 1: Comparing θ̂MLE with xn:n

(0, θ)   θ̂MLE    xn:n
N=10     2.923   2.922602
N=30     2.991   2.990607
N=100    2.984   2.983501
N=500    2.999   2.998871

It is clear to see that the likelihood is maximized when θ takes on the value of the maximum order statistic; this is where the likelihood jumps up from zero to its maximum value. Table 1 reports the MLEs and maximum order statistics of the distributions. Each MLE produces a close estimate of the true value of θ (within 0.1), and when rounded to three decimal places the MLE and maximum order statistic are equal. It is also interesting to observe that the larger the sample size, the more accurate the estimator and the more likelihood is placed on the true value.

2 Two-parameter exponential: (1, η), x ≥ η

The second example comes from a two-parameter exponential with parameters (1, η), where x ≥ η. Here the parameter is bounded above instead of below, so the MLE will be the minimum order statistic (x1:n) as opposed to the maximum. The likelihood function for this distribution is e^{−n(x̄−η)} for η ≤ x1:n and zero otherwise. This produces likelihood graphs that look like a horizontal flip of the graphs from the previous example.
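A matching sketch for this case (again hypothetical Python, not the paper's own code) shifts standard exponential draws by η and takes the minimum as the MLE:

```python
import numpy as np

rng = np.random.default_rng(5)  # arbitrary seed

def exp_likelihood(eta, x):
    """L(eta) = e^{-n(xbar - eta)} if eta <= min(x), else 0."""
    return np.where(eta <= x.min(), np.exp(-len(x) * (x.mean() - eta)), 0.0)

eta_true = 5.0
for n in (10, 30, 100):
    x = eta_true + rng.exponential(scale=1.0, size=n)  # location-shifted Exp(1) draws
    eta_hat = x.min()  # the minimum order statistic x_{1:n}
    print(f"N={n:3d}  eta_hat = {eta_hat:.6f}")
```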

The simulated data is from an exponential with scale parameter 1 and location parameter η = 5. The likelihood plots for three different sample sizes are given in Figure 2. A right angle similar to the first example is seen, making it clear where the maximum is.
Figure 2: Likelihood plots for η̂ with different sample sizes. Panels: (a) N=10, η̂MLE = 5.124; (b) N=30, η̂MLE = 5.035; (c) N=100, η̂MLE = 5.022.

The best estimate for η based on our data is the minimum order statistic. Shown in Table 2 are the exact estimates, and it is clear that the MLE and order statistic are once again equal. The estimates also get closer to the truth as the sample size gets larger: although x1:n is biased upward for any finite sample, the bias shrinks toward zero as n grows, so the estimator converges to the true value.
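This convergence can be made precise with a standard order-statistic fact not derived in the paper: the minimum of n independent Exp(1) draws is itself exponential with rate n, so

x1:n − η ∼ Exp(rate n), and E[η̂MLE] = E[x1:n] = η + 1/n → η as n → ∞.

The estimator is therefore biased by exactly 1/n for a sample of size n, a bias that vanishes as the sample grows.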

Table 2: Comparing η̂MLE with x1:n

(1, η)   η̂MLE    x1:n
N=10     5.124   5.124168
N=30     5.035   5.035962
N=100    5.022   5.022112

3 Uniform: (θ − 1, θ), θ − 1 ≤ x ≤ θ

The final example is an interesting case since the x's are bounded both above and below. Doing the analysis in the same way as the previous two, it appears that there are many MLEs. Since the density is 1 for any x between θ − 1 and θ, the likelihood equals 1 for any value of θ between the lower and upper bounds, namely xn:n ≤ θ ≤ x1:n + 1. As n increases the bounds of the estimates get closer together, and the interval of possible estimates shrinks toward a single number very close to the true value.

Figure 3: Likelihood plots for the uniform (θ − 1, θ) with θ = 2. Panels: (a) N=2; (b) N=10; (c) N=100.

Even without reporting the MLEs it would be possible to compute the bounds of the estimates from the minimum and maximum observed x's: the lower bound is the maximum order statistic, and the upper bound is the minimum order statistic plus 1.
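A final sketch in the same hypothetical Python setup computes this interval of MLEs directly from the two order statistics:

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed

theta_true = 2.0
for n in (2, 10, 100):
    x = rng.uniform(theta_true - 1.0, theta_true, size=n)
    # Every theta in [x_{n:n}, x_{1:n} + 1] attains the maximum likelihood of 1.
    lower, upper = x.max(), x.min() + 1.0
    print(f"N={n:3d}  MLE interval = [{lower:.6f}, {upper:.6f}]  width = {upper - lower:.6f}")
```

The printed interval narrows as n grows, matching the shrinking ranges reported in Table 3.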

Table 3: Comparing θ̂MLE values with x1:n and xn:n

(θ − 1, θ)   θ̂MLE               x1:n       xn:n
N=2          1.989, ..., 2.397   1.397742   1.988852
N=10         1.957, ..., 2.029   1.029647   1.956845
N=100        1.988, ..., 2.004   1.00464    1.987304

These results show that in some cases of bounded likelihood functions the MLE can be determined directly from the order statistics. The appeal of this method is that order statistics are among the easiest statistics to compute and are defined for any sample.
