
Maximum Likelihood Method-Red1eco

This document summarizes a presentation on maximum likelihood estimation (MLE). [1] It defines MLE as a method for estimating population parameters that selects values maximizing the probability of obtaining observed sample data. [2] It discusses how MLE differs from ordinary least squares (OLS) in that MLE maximizes probability rather than minimizing residuals, and does not require the assumption that errors are normally distributed. [3] While MLE can be more mathematically complex than OLS, it allows for more flexibility in specifying the distribution of the dependent variable.

Uploaded by: red1eco
Copyright: © Attribution Non-Commercial (BY-NC)

Econometrics

Topic Name: MLE


Presenter: REDWAN AHMED
Email: [email protected]

Dept. of Economics
Shahjalal University of Science & Technology, Sylhet, Bangladesh.
Content:
 Regression Analysis.

 Nature of the u term in functions.

 Estimation methods: OLS & MLE.

 MLE: Definition, Principles.

 MLE: Mean, Variance & β̂.

 MLE: Properties & Criticism.

 Comparison between OLS & MLE.


Regression Analysis

 The Consumption function:  yᵢ = β₁ + β₂xᵢ + uᵢ,  E(uᵢ) = 0
   where E(yᵢ | xᵢ) = β₁ + β₂xᵢ

 The Production function:  ln yᵢ = β₁ + β₂ ln xᵢ + uᵢ,  E(uᵢ) = 0
   where E(ln yᵢ | xᵢ) = β₁ + β₂ ln xᵢ

 Aim: estimation of the parameters β₁ and β₂.
Nature of the u term in the first function

 In the consumption function, u is normally distributed:

   yᵢ      ŷᵢ      uᵢ
   3000    2850     150
   2850    2850       0
   2700    2850    −150

Estimation method: OLS

 As u is normally distributed, we can use OLS to estimate the parameters.

Therefore:  β̂ = (X′X)⁻¹X′y
Nature of the u term in the second function

 In the production function, u is not normally distributed:

   ln ŷᵢ    ln yᵢ    uᵢ
   3000     2850     150
   3000     3000       0
   3000     2700     300

Estimation method: MLE

 As u is not normally distributed, we cannot use OLS to estimate the parameters.

 Alternative method for estimating the parameters: MLE.

 MLE requires specifying the distribution of the dependent variable.

MLE: Maximum Likelihood Estimation
 Definition:
A statistical method for estimating population parameters (such as the mean and
variance) from sample data that selects as estimates those parameter values
maximizing the probability of obtaining the observed data.

 Schematic:  Estimate parameters → Maximize the probability → of obtaining the observed data.
MLE: Principles
 A sample x₁, x₂, …, xₙ of n iid observations, drawn from a distribution with density function f(xᵢ; θ).

 Joint density function:  f(x₁, x₂, …, xₙ; θ) = ∏ᵢ f(xᵢ; θ)

 Likelihood function:  L(θ; x₁, x₂, …, xₙ) = ∏ᵢ f(xᵢ; θ)

 Maximum likelihood estimator (MLE) of θ:  θ̂ = argmax_θ L(θ; x₁, x₂, …, xₙ)
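The principle can be sketched concretely. The following is a minimal illustration, not from the slides: for a hypothetical iid Normal sample with known variance, the log-likelihood is maximized (here by a crude grid search) at the sample mean, the textbook MLE of μ.

```python
import math

# Sketch (hypothetical data): for iid Normal(mu, sigma^2) observations
# with sigma known, the log-likelihood is sum_i log f(x_i; mu); its
# maximizer is the sample mean x_bar.
def log_likelihood(mu, xs, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

xs = [2.0, 3.0, 4.0, 7.0]
# crude grid search over candidate values of mu in [0, 10]
grid = [i / 100 for i in range(0, 1001)]
mu_hat = max(grid, key=lambda mu: log_likelihood(mu, xs))
print(mu_hat)  # 4.0, the sample mean
```

Because the normal log-likelihood is concave in μ, the grid search lands exactly on x̄ = 4.0, which lies on the grid.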


MLE: Mean, Variance & β̂

 Mean:  μ̂_MLE = (Σᵢ xᵢ)/n = x̄

 Variance:  σ̂²_MLE = (1/n) Σᵢ (xᵢ − x̄)²,  but  σ̂²_OLS = (1/(n − k)) Σᵢ (xᵢ − x̄)²

 β̂_MLE = (Σᵢ xᵢyᵢ)/(Σᵢ xᵢ²)
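The contrast between the two variance formulas can be made concrete. This is a minimal sketch with hypothetical values (the residual-like numbers 150, 0, −150 from the earlier consumption-function slide): the only difference is the divisor, n for the MLE versus n − k for the OLS-style estimator (here k = 1).

```python
# Sketch (hypothetical data): MLE variance divides by n, the
# unbiased/OLS-style variance divides by n - k (here k = 1).
xs = [150.0, 0.0, -150.0]
n = len(xs)
x_bar = sum(xs) / n                        # 0.0
ss = sum((x - x_bar) ** 2 for x in xs)     # 45000.0
var_mle = ss / n                           # divide by n
var_unbiased = ss / (n - 1)                # divide by n - k
print(var_mle, var_unbiased)               # 15000.0 22500.0
```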
MLE: Properties

 Consistency:  plim θ̂ = θ₀, i.e. the estimator converges in probability to the value being estimated.

 Asymptotic normality:  θ̂ ~ᵃ N[θ₀, {I(θ₀)}⁻¹]; as the sample size increases, the distribution of the MLE tends to the normal distribution.

 Asymptotic efficiency:  θ̂ is asymptotically efficient and achieves the Cramér-Rao lower bound for consistent estimators.

 Invariance:  the maximum likelihood estimator of γ₀ = c(θ₀) is c(θ̂), if c(θ₀) is a continuous and continuously differentiable function.
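The invariance property can be sketched with a standard example (hypothetical data, not from the slides): since c(t) = √t is continuous, the MLE of σ is simply the square root of the MLE of σ².

```python
import math

# Sketch (hypothetical data) of invariance: the MLE of sigma is
# sqrt of the MLE of sigma^2, because c(t) = sqrt(t) is continuous.
xs = [2.0, 4.0, 6.0, 8.0]
n = len(xs)
x_bar = sum(xs) / n                              # 5.0
var_mle = sum((x - x_bar) ** 2 for x in xs) / n  # MLE of sigma^2 = 5.0
sigma_mle = math.sqrt(var_mle)                   # MLE of sigma, by invariance
print(sigma_mle)
```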
MLE: Criticism

 Downward bias:  σ̂²_MLE is biased downward. As the sample size grows, the bias vanishes and the estimator becomes asymptotically unbiased.

 For some problems, there may be multiple estimates that maximize the likelihood; for other problems, no maximum likelihood estimate exists.

 For many models, the maximum likelihood estimator can be found as an explicit function of the observed data x₁, …, xₙ; for many other models, however, no closed-form solution to the maximization problem is known or available.
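The downward bias can be quantified in a short sketch (a standard result, not spelled out on the slides): E[σ̂²_MLE] = ((n − 1)/n) σ², so the estimate is too small on average, but the factor (n − 1)/n tends to 1 as n grows, which is why the bias vanishes asymptotically.

```python
# Sketch: the bias factor of the MLE variance is (n - 1)/n,
# which approaches 1 as the sample size n grows.
for n in (5, 50, 5000):
    print(n, (n - 1) / n)  # 0.8, 0.98, 0.9998
```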
Comparison between OLS & MLE

 Objective:  MLE maximizes the probability of obtaining the observed data; OLS minimizes the sum of squared residuals.

 Bias:  the MLE (variance) estimator is biased, though as the sample size n grows it tends to the OLS estimator; the OLS estimator is unbiased.

 Complexity:  MLE is slightly more mathematically complex; OLS is slightly simpler.

 Specification:  for MLE, the specification of the distribution of Y is crucial, and Y may follow any type of distribution; for OLS, the specification of the distribution of u is crucial, and it is assumed to be normal. If Y is normally distributed, the MLE and OLS estimators are identical.
The end

Thank you.
