Module 2.1 Slides PDF
© University of New South Wales
School of Risk and Actuarial Studies
Parameter Estimation
- Parameter estimation
- Definition of an estimator
- The method of moments
- Example & exercise
Definition of an Estimator
- Any statistic, i.e., a function $T(X_1, X_2, \ldots, X_n)$ of observable random variables whose values are used to estimate $\tau(\theta)$, where $\tau(\theta)$ is some function of the parameter $\theta$, is called an estimator of $\tau(\theta)$.
- For example:
  $T(X_1, X_2, \ldots, X_n) = \bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j$ is an estimator;
  $\hat{\theta} = 0.23$ is a point estimate.
The method of moments

Equate the first $k$ moments of the fitted distribution to the corresponding sample moments:

$$\mu_1(\theta_1, \theta_2, \ldots, \theta_k) = E[X], \quad \mu_2(\theta_1, \theta_2, \ldots, \theta_k) = E\left[X^2\right], \quad \ldots, \quad \mu_k(\theta_1, \theta_2, \ldots, \theta_k) = E\left[X^k\right].$$

For example, matching the first two moments gives:

$$\hat{\mu} = E[X] = \bar{x}$$

$$\hat{\sigma}^2 = E\left[X^2\right] - (E[X])^2 = \frac{1}{n}\sum_{j=1}^{n} x_j^2 - \bar{x}^2 = \frac{1}{n}\sum_{j=1}^{n} (x_j - \bar{x})^2 = \frac{n-1}{n}\, s^2,$$

* using $s^2 = \frac{\sum_{j=1}^{n}(x_j - \bar{x})^2}{n-1}$, the sample variance.

Note: $E\left[\hat{\sigma}^2\right] \neq \sigma^2$ (biased estimator); more on this later.
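As a numerical sanity check of the identities above, here is a minimal sketch assuming NumPy is available; the sample is simulated and not part of the course material:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=200)  # hypothetical sample
n = len(x)

# Method-of-moments estimates: match E[X] and E[X^2] to sample moments.
mu_hat = x.mean()                            # first moment -> mean
sigma2_hat = (x**2).mean() - x.mean()**2     # E[X^2] - (E[X])^2

# Equivalent form: (n-1)/n times the unbiased sample variance s^2.
s2 = x.var(ddof=1)
assert np.isclose(sigma2_hat, (n - 1) / n * s2)
```

The assertion confirms the algebraic identity $\hat{\sigma}^2 = \frac{n-1}{n}s^2$, which is the source of the bias noted above.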
Maximum likelihood estimation
Likelihood function

$$L(\theta; x_1, x_2, \ldots, x_n) = \prod_{j=1}^{n} f_X(x_j \mid \theta).$$
First-order condition (two-parameter case):

$$\nabla L(\theta_1, \theta_2) = 0,$$

and second-order condition: for every $(h_1, h_2) \neq (0, 0)$,

$$\begin{pmatrix} h_1 & h_2 \end{pmatrix} \begin{pmatrix} \dfrac{\partial^2 L}{\partial \theta_1^2} & \dfrac{\partial^2 L}{\partial \theta_1 \partial \theta_2} \\ \dfrac{\partial^2 L}{\partial \theta_1 \partial \theta_2} & \dfrac{\partial^2 L}{\partial \theta_2^2} \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} < 0,$$

i.e., the Hessian of $L$ is negative definite at the maximum.
Log-likelihood function

- Generally, maximizing the log-likelihood function is easier.
- Not surprisingly, we define the log-likelihood function as:

$$\ell(\theta_1, \theta_2, \ldots, \theta_k; x) = \log(L(\theta_1, \theta_2, \ldots, \theta_k; x)) = \log\left(\prod_{j=1}^{n} f_X(x_j \mid \theta)\right) = \sum_{j=1}^{n} \log(f_X(x_j \mid \theta)).$$
MLE procedure:
1. Write down the likelihood function of the sample.
2. Take the logarithm to obtain the log-likelihood.
3. Differentiate the log-likelihood w.r.t. each parameter and set the derivatives equal to zero.
4. Solve for the parameters and check the second-order conditions.
Solution (MLE for an i.i.d. normal sample):
2. Its log-likelihood function is:

$$\ell(\mu, \sigma; x) = \sum_{k=1}^{n} \log\left(\frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2}\left(\frac{x_k - \mu}{\sigma}\right)^2\right)\right) = -n\log(\sigma) - \frac{n}{2}\log(2\pi) - \frac{1}{2\sigma^2}\sum_{k=1}^{n}(x_k - \mu)^2.$$

* using $\log(1/a) = \log(a^{-1}) = -\log(a)$, with $a = \sigma$, and $\log(1/\sqrt{b}) = \log(b^{-0.5}) = -0.5\log(b)$, with $b = 2\pi$.

Take the derivative w.r.t. $\mu$ and $\sigma$ and set it equal to zero.
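The closed-form log-likelihood can be checked directly against the definition $\ell = \sum_j \log f_X(x_j \mid \theta)$. A small sketch assuming NumPy, with made-up data:

```python
import numpy as np

def norm_loglik(mu, sigma, x):
    """Log-likelihood of an i.i.d. N(mu, sigma^2) sample, in the simplified form above."""
    n = len(x)
    return (-n * np.log(sigma)
            - n / 2 * np.log(2 * np.pi)
            - np.sum((x - mu) ** 2) / (2 * sigma ** 2))

# Sanity check against the definition: sum of log densities.
x = np.array([1.2, 0.7, 2.3, 1.9, 1.1])  # made-up data
mu, sigma = 1.5, 0.6
direct = np.sum(np.log(np.exp(-0.5 * ((x - mu) / sigma) ** 2)
                       / (sigma * np.sqrt(2 * np.pi))))
assert np.isclose(norm_loglik(mu, sigma, x), direct)
```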
Examples: MME and MLE
$$\frac{\partial}{\partial \mu}\,\ell(\mu, \sigma; x) = \frac{1}{\sigma^2}\sum_{k=1}^{n}(x_k - \mu) = 0 \;\Longrightarrow\; \sum_{k=1}^{n} x_k - n\mu = 0 \;\Longrightarrow\; \hat{\mu} = \bar{x}$$

$$\frac{\partial}{\partial \sigma}\,\ell(\mu, \sigma; x) = -\frac{n}{\sigma} + \frac{\sum_{k=1}^{n}(x_k - \mu)^2}{\sigma^3} = 0 \;\Longrightarrow\; n = \frac{\sum_{k=1}^{n}(x_k - \mu)^2}{\sigma^2} \;\Longrightarrow\; \hat{\sigma}^2 = \frac{1}{n}\sum_{k=1}^{n}(x_k - \bar{x})^2.$$
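The closed-form solutions $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_k (x_k - \bar{x})^2$ can be cross-checked against a generic numerical maximizer. A sketch assuming SciPy is available; the data is made up:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([2.1, 3.4, 1.8, 2.9, 3.0, 2.5])  # made-up data
n = len(x)

# Closed-form MLEs from the first-order conditions above.
mu_hat = x.mean()
sigma_hat = np.sqrt(((x - mu_hat) ** 2).mean())  # note: divides by n, not n-1

def neg_loglik(params):
    # Negative log-likelihood, dropping the constant (n/2) log(2*pi).
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize via log(sigma) to keep sigma > 0
    return n * np.log(sigma) + np.sum((x - mu) ** 2) / (2 * sigma ** 2)

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_num, sigma_num = res.x[0], np.exp(res.x[1])
assert np.allclose([mu_num, sigma_num], [mu_hat, sigma_hat], atol=1e-2)
```

Dropping the additive constant does not change the maximizer, which is why it can be omitted from the objective.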
Solution:
1. Equate sample moments to population moments:

$$\mu_1 = M_X^{(1)}(t)\Big|_{t=0} = E[X] = \bar{x}$$

$$\mu_2 = M_X^{(2)}(t)\Big|_{t=0} = E\left[X^2\right] = \sum_{i=1}^{n} \frac{x_i^2}{n}.$$
[Figure: the likelihood function $L(\theta; x)$ plotted against $\theta$, with the order statistics $x_{(1)}, x_{(2)}, x_{(3)}, x_{(4)}$ marked on the horizontal axis.]
Estimator III: Bayesian estimator

Introduction
We have seen:
- Method of moments estimator:
  Idea: the first $k$ moments of the fitted distribution and of the sample are the same.
- Maximum likelihood estimator:
  Idea: the probability of the sample, given a class of distributions, is highest with this set of parameters.
  The MLE $\hat{\theta}$ satisfies:
  $$L(\hat{\theta}; x) - L(\theta; x) \geq 0, \text{ for every } \theta;$$
  $$L(\hat{\theta}; x) - L(\theta; x) = 0, \text{ when } \theta = \hat{\theta}.$$
Bayesian estimation
Implying: minimizing the Bayes risk $B(\hat{\theta})$ is equivalent to minimizing the posterior expected loss $r(\hat{\theta} \mid x)$ for all $x$.
$$\pi(\theta \mid x) = \frac{f_{X \mid \Theta}(x_1, x_2, \ldots, x_T \mid \theta)\, \pi(\theta)}{\int f_{X \mid \Theta}(x_1, x_2, \ldots, x_T \mid \theta)\, \pi(\theta)\, d\theta} \tag{1}$$

$$= \frac{f_{X \mid \Theta}(x_1, x_2, \ldots, x_T \mid \theta)\, \pi(\theta)}{f_X(x_1, x_2, \ldots, x_T)} \tag{2}$$
Estimation procedure:
1. Find the posterior density using (1) (difficult/tedious integral!) or (2).
2. Compute the Bayesian estimator (using the posterior) under a given loss function (under the mean squared loss function: take the expectation of the posterior distribution).
Examples: Bayesian estimation
Consider a Beta$(a, b)$ prior:

$$\pi(\theta) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \theta^{a-1} (1-\theta)^{b-1}.$$

The posterior is proportional to the likelihood times the prior:

$$\pi(\theta \mid x) \propto f_{X \mid \Theta}(x_1, x_2, \ldots, x_T \mid \theta)\, \pi(\theta) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \theta^{(a+s)-1} (1-\theta)^{(b+T-s)-1} \tag{3}$$

The marginal density of the data is:

$$f_X(x) = \int_0^1 f_{X \mid \Theta}(x \mid \theta)\, \pi(\theta)\, d\theta = \int_0^1 \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \theta^{(a+s)-1} (1-\theta)^{(b+T-s)-1}\, d\theta = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \frac{\Gamma(a+s)\,\Gamma(b+T-s)}{\Gamma(a+b+T)}.$$
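The gamma-function form of the marginal can be verified by numerical integration. A sketch assuming SciPy is available; the values of $a$, $b$, $T$, and $s$ are hypothetical:

```python
import math
from scipy.integrate import quad

a, b, T, s = 2.0, 3.0, 10, 4  # hypothetical prior parameters and data summary

def integrand(theta):
    # Bernoulli likelihood kernel times the Beta(a, b) prior density.
    prior = (math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
             * theta**(a - 1) * (1 - theta)**(b - 1))
    return theta**s * (1 - theta)**(T - s) * prior

numeric, _ = quad(integrand, 0, 1)
closed = (math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
          * math.gamma(a + s) * math.gamma(b + T - s) / math.gamma(a + b + T))
assert math.isclose(numeric, closed, rel_tol=1e-8)
```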
Using the marginal from the previous slide, we can derive the posterior density:

$$\pi(\theta \mid x) = \frac{f_{X \mid \Theta}(x \mid \theta)\, \pi(\theta)}{f_X(x)} = \frac{\theta^s (1-\theta)^{T-s}\, \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \theta^{a-1} (1-\theta)^{b-1}}{\frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \frac{\Gamma(a+s)\,\Gamma(b+T-s)}{\Gamma(a+b+T)}} = \frac{\Gamma(a+b+T)}{\Gamma(a+s)\,\Gamma(b+T-s)}\, \theta^{(a+s)-1} (1-\theta)^{(b+T-s)-1},$$
2. The mean of this r.v. with the above posterior density is then:

$$\hat{\theta}_B = E[\theta \mid X = x] = E[\text{Beta}(a+s,\, b+T-s)] = \frac{a+s}{a+b+T},$$

which gives the Bayesian estimator of $\theta$.

We note that we can write the Bayesian estimator as a weighted average of the prior mean (which is $a/(a+b)$) and the sample mean (which is $s/T$) as follows:

$$\hat{\theta}_B = \underbrace{\frac{T}{a+b+T}}_{\text{weight sample}} \cdot \underbrace{\frac{s}{T}}_{\text{sample mean}} + \underbrace{\frac{a+b}{a+b+T}}_{\text{weight prior}} \cdot \underbrace{\frac{a}{a+b}}_{\text{prior mean}}.$$
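The posterior mean and its weighted-average decomposition can be illustrated numerically; the values of $a$, $b$, $T$, and $s$ below are made up:

```python
a, b = 2.0, 3.0        # hypothetical Beta prior parameters
T, s = 10, 4           # T Bernoulli trials, s successes (made-up data summary)

# Bayesian estimator under squared loss: mean of the Beta(a+s, b+T-s) posterior.
theta_bayes = (a + s) / (a + b + T)

# Weighted-average decomposition: sample mean s/T vs prior mean a/(a+b).
weight_sample = T / (a + b + T)
weight_prior = (a + b) / (a + b + T)
decomposed = weight_sample * (s / T) + weight_prior * (a / (a + b))

assert abs(theta_bayes - decomposed) < 1e-12
print(theta_bayes)  # 6/15 = 0.4
```

As $T$ grows, the weight on the sample mean tends to 1, so the data increasingly dominates the prior.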