Lecture 6

• Consider the exponential density

f(x | λ) = λ exp(−λx), x ≥ 0

• Given iid data, D = {x1, · · · , xn}, we need to estimate λ.
• The likelihood function is

L(λ | D) = ∏_{i=1}^{n} λ exp(−λxi) = λ^n exp(−λ Σ_{i=1}^{n} xi)

• Maximizing the log-likelihood, n log λ − λ Σ_i xi, gives the ML estimate

λ̂ = n / Σ_{i=1}^{n} xi = 1 / x̄
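Since maximizing the likelihood gives λ̂ = n / Σ_i xi = 1 / x̄, the estimator is easy to check numerically. A minimal sketch (variable names and the simulation setup are ours, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 2.0

# iid exponential data; NumPy's scale parameter is 1/lambda
x = rng.exponential(scale=1.0 / lam_true, size=100_000)

# ML estimate: lambda_hat = n / sum(x_i) = 1 / sample mean
lam_hat = 1.0 / x.mean()
```

With a large sample, lam_hat should be close to lam_true.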
x = [x^1, · · · , x^M]^T, x^i ∈ {0, 1}, Σ_i x^i = 1
• Here, p = (p1 , · · · , pM )T is the parameter vector.
PR NPTEL course – p.40/123
• Now the problem of estimating the parameters, pi ,
becomes the following.
D = {x1, · · · , xn}

where xi = [x^1_i, · · · , x^M_i]^T with x^j_i ∈ {0, 1} and Σ_j x^j_i = 1, ∀i.
• We know the probability mass function of x and we
need to derive ML estimates for parameters pi .
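For reference, the mass function being assumed here is the standard one for such one-hot vectors (this explicit form is our addition; it is consistent with the estimate derived below):

```latex
f(x \mid p) = \prod_{j=1}^{M} p_j^{x^j},
\qquad
L(p \mid D) = \prod_{i=1}^{n} \prod_{j=1}^{M} p_j^{x_i^j}
```

Because exactly one component x^j equals 1, the product picks out the probability of the class that x encodes.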
Σ_{j=1}^{M} Σ_{i=1}^{n} x^j_i = Σ_{i=1}^{n} Σ_{j=1}^{M} x^j_i = Σ_{i=1}^{n} 1 = n

where the last step follows because Σ_j x^j_i = 1, ∀i.
• Thus, we get the final ML estimate for pj as

p̂j = (1/n) Σ_{i=1}^{n} x^j_i
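The estimate p̂j is just the empirical frequency of class j in the data. A small sketch on toy one-hot data (the data matrix is ours, for illustration only):

```python
import numpy as np

# One-hot data: n = 6 patterns, M = 3 classes (toy data)
X = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
])

# Each row sums to 1, so summing all entries gives n
assert X.sum() == len(X)

# ML estimate: p_hat_j = (1/n) * sum_i x_i^j  -- column-wise class frequencies
p_hat = X.mean(axis=0)
```

Here p_hat recovers the fractions 2/6, 3/6, 1/6, and the estimates sum to 1 as a probability vector must.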
f(θ | D) = f(D | θ) f(θ) / ∫ f(D | θ) f(θ) dθ

where f(D | θ) = ∏_i f(xi | θ) is the data likelihood that we considered earlier.
• In the above expression for f (θ | D), the denominator
is not a function of θ . It is a normalizing constant and
when we do not need its details, we will denote it by
Z.
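As a numerical illustration of this normalization (our own sketch, not part of the lecture): for the exponential example above, take θ = λ, put a uniform prior on a grid of λ values, and compute the posterior by dividing likelihood × prior by its integral Z.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=0.5, size=50)  # data with true lambda = 2

# Discretize theta (= lambda) on a grid; uniform prior (our assumption)
theta = np.linspace(0.01, 10.0, 2000)
dtheta = theta[1] - theta[0]

# log f(D | theta) = n log(theta) - theta * sum(x_i)
log_lik = len(x) * np.log(theta) - theta * x.sum()
log_post = log_lik - log_lik.max()  # stabilize before exponentiating

# Divide by Z (approximated by a Riemann sum) so the posterior integrates to 1
post = np.exp(log_post)
post /= post.sum() * dtheta

# The posterior mean should sit near the ML estimate 1 / mean(x)
post_mean = (theta * post).sum() * dtheta
```

The uniform prior makes the posterior proportional to the likelihood, so the posterior concentrates around the ML estimate as n grows.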