04 Exact Inference
University of Freiburg
Machine Learning Lab
Counterexample of a polytree

[Figure: graphical model with hidden variables U1, ..., U7, query variable X, and observed variables O1, O2]

p(X = x | O1 = o1, O2 = o2)
  = p(X = x, O1 = o1, O2 = o2) / p(O1 = o1, O2 = o2)
  = p(X = x, O1 = o1, O2 = o2) / ∫ p(X = x', O1 = o1, O2 = o2) dx'

with

p(X = x, O1 = o1, O2 = o2)
  = ∫ ... ∫ p(X = x, O1 = o1, O2 = o2, U1 = u1, ..., U7 = u7) du1 ... du7

→ continue on blackboard
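The marginalization and normalization above can be sketched numerically. The network on the slide is continuous; for a runnable sketch, assume a tiny discrete analogue (two hidden variables U1, U2 and one observation O1; all probability tables below are made up), so the integrals become sums:

```python
# Sketch: exact inference by brute-force enumeration on a hypothetical
# discrete network p(X, U1, U2, O1) = p(U1) p(U2) p(X|U1,U2) p(O1|X).
# All numbers are illustrative, not from the slide.
import itertools

p_u1 = {0: 0.6, 1: 0.4}
p_u2 = {0: 0.7, 1: 0.3}
p_x_given_u = {(0, 0): 0.9, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.1}  # P(X=0 | u1,u2)
p_o1_given_x = {0: 0.8, 1: 0.3}                                      # P(O1=0 | x)

def joint(x, u1, u2, o1):
    px0 = p_x_given_u[(u1, u2)]
    po0 = p_o1_given_x[x]
    return (p_u1[u1] * p_u2[u2]
            * (px0 if x == 0 else 1.0 - px0)
            * (po0 if o1 == 0 else 1.0 - po0))

def posterior_x(o1):
    # p(X=x | O1=o1): sum the hidden variables out of the joint,
    # then normalize -- the discrete version of the slide's ratio.
    unnorm = {x: sum(joint(x, u1, u2, o1)
                     for u1, u2 in itertools.product((0, 1), repeat=2))
              for x in (0, 1)}
    z = sum(unnorm.values())
    return {x: v / z for x, v in unnorm.items()}

post = posterior_x(o1=0)
print(post)  # posterior over X given the observation O1 = 0
```

The cost of this enumeration is exponential in the number of hidden variables, which is exactly what message-passing algorithms avoid.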
Examples
◮ if X is distributed w.r.t. the Dirac distribution δ(x), X can only take the value 0
◮ if Y is distributed w.r.t. 0.3 · δ(y − 2) + 0.7 · δ(y + 5.1), Y takes the
value 2 with probability 0.3 and −5.1 with probability 0.7
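The mixture-of-Diracs example can be checked by sampling: a draw from 0.3 · δ(y − 2) + 0.7 · δ(y + 5.1) equals 2 with probability 0.3 and −5.1 otherwise. A minimal sketch (the sample count is arbitrary):

```python
# Sampling from the Dirac mixture 0.3*delta(y - 2) + 0.7*delta(y + 5.1):
# pick the component by its mixture weight; each component is deterministic.
import random

def sample_y():
    return 2.0 if random.random() < 0.3 else -5.1

samples = [sample_y() for _ in range(100_000)]
frac_two = sum(s == 2.0 for s in samples) / len(samples)
print(frac_two)  # close to 0.3
```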
m          −1     +1
P(M = m)   3/4    1/4

E: result of the exam

P(E = e | C = c, M = m) =
  1/4  if e = c + m
  1/2  if e = c
  1/4  if e = c − m
  0    otherwise
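Given these tables, P(E = e | C = c) follows by summing M out: P(E = e | C = c) = Σ_m P(M = m) · P(E = e | C = c, M = m). A sketch with exact fractions, using c = 3 as an arbitrary illustration value:

```python
# Marginalize M out of the exam example:
# P(E=e | C=c) = sum_m P(M=m) * P(E=e | C=c, M=m).
from fractions import Fraction as F

p_m = {-1: F(3, 4), +1: F(1, 4)}  # prior over M from the slide's table

def p_e_given_cm(e, c, m):
    # conditional table from the slide
    if e == c + m:
        return F(1, 4)
    if e == c:
        return F(1, 2)
    if e == c - m:
        return F(1, 4)
    return F(0)

def p_e_given_c(e, c):
    return sum(p_m[m] * p_e_given_cm(e, c, m) for m in p_m)

c = 3  # arbitrary value of C for illustration
dist = {e: p_e_given_c(e, c) for e in (c - 1, c, c + 1)}
print(dist)  # e = c-1, c, c+1 get probability 1/4, 1/2, 1/4
```

Note that the result is symmetric around c regardless of P(M), because the conditional table is symmetric in m.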
The example motivates a generic algorithm to calculate max_u log p(U = u, O = o),
known as the max-sum algorithm.
How do we calculate arg max_u log p(U = u, O = o) with the max-sum
algorithm?
Example:

[Figure: factor graph chain U1 — f1 — U2 — f2 — U3]

f1(u1, u2)   u2 = 0   u2 = 1
u1 = 0        1/4      1/8
u1 = 1        1/8      1/2
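Max-sum on this chain can be sketched as follows: messages are passed in the log domain, each factor-to-variable message maximizes over the incoming variable, and the recorded argmaxes are back-tracked from the root. f1 is the table above; f2 is an assumed second factor, since its values are not given here:

```python
# Max-sum sketch on the chain U1 -- f1 -- U2 -- f2 -- U3.
# f1 is the slide's table; f2 is a made-up factor for illustration.
import math
from itertools import product

f1 = {(0, 0): 1/4, (0, 1): 1/8, (1, 0): 1/8, (1, 1): 1/2}
f2 = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.4}  # hypothetical

# Forward pass: messages in the log domain, recording the argmax at
# each maximization so the best configuration can be back-tracked.
msg_f1_u2, back_u1 = {}, {}
for u2 in (0, 1):
    msg_f1_u2[u2], back_u1[u2] = max(
        (math.log(f1[(u1, u2)]), u1) for u1 in (0, 1))

msg_f2_u3, back_u2 = {}, {}
for u3 in (0, 1):
    msg_f2_u3[u3], back_u2[u3] = max(
        (math.log(f2[(u2, u3)]) + msg_f1_u2[u2], u2) for u2 in (0, 1))

# Root decision at U3, then back-track through the recorded argmaxes.
log_max, u3 = max((msg_f2_u3[u3], u3) for u3 in (0, 1))
u2 = back_u2[u3]
u1 = back_u1[u2]
print((u1, u2, u3), log_max)  # → (1, 1, 1) with log(1/2 * 0.4)

# Sanity check against brute force over all 8 configurations.
brute = max(math.log(f1[(a, b)]) + math.log(f2[(b, c)])
            for a, b, c in product((0, 1), repeat=3))
assert abs(brute - log_max) < 1e-12
```

The message passing visits each factor once, whereas brute force enumerates all joint configurations; on chains both are cheap, but the gap grows exponentially with the number of variables.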
◮ no
◮ loopy belief propagation (Frey and MacKay, 1998)
◮ EM/ECM algorithm
◮ variational methods
◮ Monte Carlo methods