
WEEK 1

CHAPTER 1: Estimation

1.1 Point Estimation

We may have several different choices for the point estimator of a parameter. For example,
if we wish to estimate the mean of a population, we might consider the sample mean, the
sample median, or even the average of the smallest and largest observations in the sample
as point estimators. Therefore, we need to examine their statistical properties and develop
criteria for comparing estimators.

1.2 Methods of Point Estimation

The methods of estimation discussed here are maximum likelihood estimation and the
method of moments.

Some might ask: why do we need to estimate parameters? A brief answer is as follows:

Let X be a random variable with probability function f(x; θ), where θ is a parameter in a
parameter space Ω. If θ were known, the probability function would be fully specified, and
consequently we would be able to calculate the probabilities related to X. Usually θ is
unknown, so the objective is to estimate θ on the basis of a random sample of size n,
X1, X2, ..., Xn, from f(x; θ). Of course, we would like to get a "good estimate" of θ.

1.2.1 Maximum Likelihood Estimation (MLE)

The most widely accepted principle is the principle of maximum likelihood: choose as the
estimate of the parameter θ the value that maximizes the likelihood function

L(θ) = ∏_{i=1}^{n} f(xi; θ),

which depends on the sample values (observed values). We will illustrate this method
through examples.

Note:
- The method of maximum likelihood cannot be applied without knowledge of the
  underlying distribution.

- Joint pdf's and likelihood functions look the same, but the two are interpreted
  differently. A joint pdf defined for a set of n random variables is a multivariate
  function of those random variables. In contrast, L is a function of θ, with the
  observed values x1, x2, ..., xn held fixed.
- There are a few situations where the equations

  dL(θ)/dθ = 0   or   d ln L(θ)/dθ = 0

  are not meaningful and do not yield a solution for θ̂. In those cases, the MLE often
  turns out to be an order statistic, for reasons having to do with the range of the
  random variable.

Definition 1.1
Let X1, X2, ..., Xn be a random sample from f(x; θ), where θ is an unknown parameter.
The likelihood function, L(θ), is the product of the pdf f(x; θ) evaluated at the n data
points. That is,

L(θ) = ∏_{i=1}^{n} f(xi; θ)

Example 1.1

A random sample of size n, X1, X2, ..., Xn, is taken from a B(1, p) (Bernoulli)
distribution with observed values x1, x2, ..., xn. Determine the maximum likelihood
estimate of the parameter p.

Solution

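As a hedged numerical sketch of this example (the 0/1 sample below is hypothetical): the log-likelihood is ℓ(p) = (Σ xi) ln p + (n − Σ xi) ln(1 − p), and a grid search over p agrees with the closed-form MLE, the sample mean x̄:

```python
import math

xs = [1, 0, 1, 1, 0, 1]      # hypothetical Bernoulli(1, p) observations
n, s = len(xs), sum(xs)

def log_L(p):
    # ln L(p) = s ln p + (n - s) ln(1 - p)
    return s * math.log(p) + (n - s) * math.log(1 - p)

# grid-search the log-likelihood over (0, 1)
p_hat = max((i / 1000 for i in range(1, 1000)), key=log_L)
print(p_hat, s / n)   # grid maximizer vs sample mean: both ≈ 2/3
```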
Example 1.2

Consider a Poisson distribution with probability function

e   x
f  x;    , x  0,1, 2,...
x!
Suppose that a random sample x1 , x2 ,..., xn is taken from the distribution. What is the

maximum likelihood estimate of  ?

Solution
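A minimal numerical sketch (with hypothetical count data): setting d ln L/dλ = Σ xi / λ − n = 0 gives λ̂ = x̄, and a grid search over the log-likelihood (dropping the Σ ln(xi!) term, constant in λ) confirms this:

```python
import math

xs = [2, 0, 3, 1, 2, 4]      # hypothetical Poisson counts
n, s = len(xs), sum(xs)

def log_L(lam):
    # ln L(λ) = -nλ + (Σ xi) ln λ - Σ ln(xi!); the last term is constant in λ
    return -n * lam + s * math.log(lam)

lam_hat = max((i / 1000 for i in range(1, 10000)), key=log_L)
print(lam_hat, s / n)   # both equal the sample mean, 2.0
```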
Example 1.3

It is known that the sample 12, 11.2, 13.5, 12.3, 13.8, 11.9 comes from a population with
probability function

f(x; θ) = θ / x^(θ+1),   x ≥ 1   (and 0 otherwise)

where θ > 0. Find the maximum likelihood estimate of θ.

Solution
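A sketch of the computation: here ln L(θ) = n ln θ − (θ + 1) Σ ln xi, so setting the derivative n/θ − Σ ln xi to zero gives the closed form θ̂ = n / Σ ln xi, evaluated below on the sample given above:

```python
import math

xs = [12, 11.2, 13.5, 12.3, 13.8, 11.9]   # the sample given above
n = len(xs)
slog = sum(math.log(x) for x in xs)

# ln L(θ) = n ln θ - (θ + 1) Σ ln xi;  d ln L/dθ = n/θ - Σ ln xi = 0
theta_hat = n / slog
print(round(theta_hat, 4))   # ≈ 0.397
```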
Example 1.4

Based on the random sample Y1 = 6.3, Y2 = 1.8, Y3 = 14.2 and Y4 = 7.6, use the method of
maximum likelihood to estimate the parameter θ in the uniform probability density function

f(y; θ) = 1/θ,   0 ≤ y ≤ θ
Solution
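This is one of the cases flagged in the Note above where calculus fails: L(θ) = θ^(−n) for θ ≥ max yi and 0 otherwise, which is strictly decreasing, so the maximizer is the largest order statistic. A small grid-search sketch over the likelihood confirms this on the given sample:

```python
ys = [6.3, 1.8, 14.2, 7.6]   # the sample given above
n = len(ys)

def L(theta):
    # L(θ) = (1/θ)^n when θ ≥ max yi; 0 otherwise (some yi fall outside support)
    return theta ** (-n) if theta >= max(ys) else 0.0

theta_hat = max((i / 100 for i in range(1, 3000)), key=L)
print(theta_hat)   # 14.2, the largest observation
```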
Example 1.5

A random sample of size n is taken from the probability density function

f(x; θ) = 2x / θ²,   0 ≤ x ≤ θ

Find an expression for θ̂, the maximum likelihood estimator for θ.

Solution
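A hedged numerical sketch (the sample below is hypothetical): L(θ) = 2ⁿ (∏ xi) θ^(−2n) for θ ≥ max xi and 0 otherwise, again strictly decreasing in θ, so the MLE is an order statistic, θ̂ = max Xi:

```python
xs = [0.4, 1.1, 0.9, 1.6]   # hypothetical observations

def L(theta):
    # L(θ) = ∏ 2xi/θ² = 2^n (∏ xi) θ^(-2n) when θ ≥ max xi, else 0
    if theta < max(xs):
        return 0.0
    prod = 1.0
    for x in xs:
        prod *= 2 * x / theta**2
    return prod

theta_hat = max((i / 100 for i in range(1, 500)), key=L)
print(theta_hat)   # 1.6 = max(xs), the largest order statistic
```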
Note: Finding MLEs When More Than One Parameter is Unknown

If a family of probability models is indexed by two or more unknown parameters,
θ1, θ2, ..., θk, finding MLEs for the θi's requires the solution of a set of k
simultaneous equations. If k = 2, for example, we would need to solve the system

∂ ln L(θ1, θ2) / ∂θ1 = 0

∂ ln L(θ1, θ2) / ∂θ2 = 0

Example 1.6

Suppose a random sample X1, X2, ..., Xn is taken from a normal distribution N(μ, σ²).

Find the maximum likelihood estimators for μ and σ².

Solution
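A numerical sketch of the well-known closed forms (with a hypothetical sample): solving the two simultaneous equations gives μ̂ = x̄ and σ̂² = (1/n) Σ (xi − x̄)², with divisor n rather than n − 1. The code checks that small perturbations of (μ̂, σ̂²) do not increase the log-likelihood:

```python
import math

xs = [4.9, 5.3, 6.1, 4.4, 5.8]   # hypothetical sample
n = len(xs)
mu_hat = sum(xs) / n
var_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # divisor n, not n - 1

def log_L(mu, var):
    # ln L = -(n/2) ln(2πσ²) - Σ (xi - μ)² / (2σ²)
    return -0.5 * n * math.log(2 * math.pi * var) \
           - sum((x - mu) ** 2 for x in xs) / (2 * var)

base = log_L(mu_hat, var_hat)
for dmu in (-0.01, 0.01):          # nearby values should not do better
    for dv in (-0.01, 0.01):
        assert log_L(mu_hat + dmu, var_hat + dv) <= base
print(mu_hat, var_hat)
```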
Theorem 1.1

Let ˆ  ˆ( x) be the MLE of  on the basis of observed values x1 , x2 ,..., xn of the random

sample X 1 , X 2 ,..., X n from the probability function f ( x; ) ,      . Also let  *  g ( )

be one-to-one function defined on  onto  *   . Then the MLE of  * , ˆ * ( x), is given by

ˆ * ( x)  g (ˆ( x)) .

Hence,
From Example 1.1, since the MLE of p is x , therefore according to Theorem 1, the MLE of
p( 1  p ) is x( 1  x ) .
