Hidden Markov Models: Common Probabilities

Tanishka Singh, Spring 2019
HMM Diagram: the latent chain $Q = \{q_1, \dots, q_t, q_{t+1}, q_{t+2}, \dots, q_T\}$ moves from state $s_i$ to state $s_j$ with transition probability $a_{ij}$; each state emits the observation at its time step with probability $b_i(o_t)$ (respectively $b_j(o_{t+1})$), producing the observed blocks $o_{1:t}$ and $o_{t+2:T}$.
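Since the forward probability $\alpha_t(i) = P(o_{1:t}, q_t = s_i)$ appears throughout the derivations below, here is a minimal NumPy sketch of a toy model $\lambda = (A, B, \pi)$ and the forward recursion that produces it. Every parameter value and name (`A`, `B`, `pi`, `obs`, `forward`) is an illustrative assumption, not part of the notes.

```python
import numpy as np

# Toy HMM lambda = (A, B, pi) with N = 2 hidden states and 2 observation
# symbols; every number here is made up for illustration.
A = np.array([[0.7, 0.3],   # A[i, j] = a_ij = P(q_{t+1} = s_j | q_t = s_i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # B[i, k] = b_i(k) = P(o_t = k | q_t = s_i)
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])   # pi[i] = P(q_1 = s_i)
obs = [0, 1, 1, 0]          # observed symbol indices o_1, ..., o_T

def forward(A, B, pi, obs):
    """Forward pass: alpha[t, i] = P(o_{1:t+1}, q_{t+1} = s_i), t 0-indexed."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                     # alpha_1(i) = pi_i b_i(o_1)
    for t in range(1, T):
        # alpha_t(j) = (sum_i alpha_{t-1}(i) a_ij) * b_j(o_t)
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

alpha = forward(A, B, pi, obs)
print(alpha[-1].sum())      # P(O | lambda) = sum_i alpha_T(i)
```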
The pairwise state posterior $\xi_t(i,j)$ is defined via Bayes' rule:

\[
\xi_t(i,j) = P(q_t = s_i,\, q_{t+1} = s_j \mid O) = \frac{P(q_t = s_i,\, q_{t+1} = s_j,\, O)}{P(O)} \tag{1}
\]

Multiplying through by $P(O)$ and splitting $O$ into $o_{1:t}$ and $o_{t+1:T}$:

\[
P(O)\,\xi_t(i,j) = P(q_t = s_i,\, q_{t+1} = s_j,\, o_{1:t},\, o_{t+1:T}) \tag{2}
\]
\[
P(O)\,\xi_t(i,j) = P(o_{1:t},\, q_t = s_i,\, o_{t+1:T},\, q_{t+1} = s_j) \tag{3}
\]
\[
P(O)\,\xi_t(i,j) = \underbrace{P(o_{1:t},\, q_t = s_i)}_{\alpha_t(i)}\, P(o_{t+1:T},\, q_{t+1} = s_j \mid o_{1:t},\, q_t = s_i) \tag{4}
\]

By the Markov property, the second factor depends on the past only through $q_t = s_i$, and it factorizes into the transition to $s_j$, the emission of $o_{t+1}$, and the remaining observations:

\[
P(o_{t+1:T},\, q_{t+1} = s_j \mid q_t = s_i) = a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j) \tag{5}
\]

so that

\[
\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{P(O)} \tag{6}
\]
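As a concrete check on (6), the following self-contained sketch builds the $\alpha$ and $\beta$ tables (the backward recursion is derived in the next section; $\beta_T(i) = 1$ is the usual base case) and forms all $\xi_t(i,j)$ at once. The toy parameters and variable names are assumptions.

```python
import numpy as np

# Same toy model as the earlier sketch (values are illustrative).
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # a_ij
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # b_i(o)
pi = np.array([0.6, 0.4])
obs = [0, 1, 1, 0]
T, N = len(obs), len(pi)

alpha = np.zeros((T, N))                 # forward table
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

beta = np.zeros((T, N))                  # backward table
beta[-1] = 1.0                           # base case beta_T(i) = 1
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

P_O = alpha[-1].sum()                    # P(O)

# xi[t, i, j] = alpha_t(i) a_ij b_j(o_{t+1}) beta_{t+1}(j) / P(O), for t < T
emit_next = B[:, obs[1:]].T * beta[1:]   # row t holds b_j(o_{t+1}) beta_{t+1}(j)
xi = (alpha[:-1, :, None] * A[None, :, :] * emit_next[:, None, :]) / P_O

# Each xi_t must sum to 1 over all state pairs (i, j).
assert np.allclose(xi.sum(axis=(1, 2)), 1.0)
```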
The backward probability $\beta_t(i) = P(o_{t+1:T} \mid q_t = s_i)$ follows by marginalizing over all possible next states $s_j$, accounting for the transition into $s_j$, its emission of $o_{t+1}$, and the observations that remain:
\[
\beta_t(i) = \sum_{j=1}^{N} P(q_{t+1} = s_j \mid q_t = s_i)\, P(o_{t+1} \mid q_{t+1} = s_j)\, P(o_{t+2:T} \mid q_{t+1} = s_j)
\]
\[
\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)
\]
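A direct transcription of this recursion, iterating backward from the base case $\beta_T(i) = 1$ (the standard convention, not stated above); toy parameters repeated from the earlier sketch, all assumed:

```python
import numpy as np

# Toy parameters as before (illustrative values).
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # a_ij
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # b_j(o)
obs = [0, 1, 1, 0]
T, N = len(obs), A.shape[0]

beta = np.zeros((T, N))
beta[-1] = 1.0                            # base case: beta_T(i) = 1
for t in range(T - 2, -1, -1):
    # beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
```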
The single-state posterior $\gamma_t(j)$ follows the same pattern:

\[
\gamma_t(j) = P(q_t = s_j \mid O) = \frac{P(q_t = s_j, O)}{P(O)} \quad \text{(Bayes)}
\]
\begin{align*}
P(O)\,\gamma_t(j) &= P(q_t = s_j, O) \\
P(O)\,\gamma_t(j) &= P(q_t = s_j,\, o_{1:t},\, o_{t+1:T}) && \text{(split } O\text{)} \\
P(O)\,\gamma_t(j) &= P(o_{1:t},\, q_t = s_j,\, o_{t+1:T}) \\
P(O)\,\gamma_t(j) &= P(o_{1:t},\, q_t = s_j)\, P(o_{t+1:T} \mid o_{1:t},\, q_t = s_j) && \text{(chain rule)} \\
P(O)\,\gamma_t(j) &= P(o_{1:t},\, q_t = s_j)\, P(o_{t+1:T} \mid q_t = s_j) && \text{(} o_{t+1:T} \perp o_{1:t} \mid q_t \text{)}
\end{align*}
\[
\gamma_t(j) = \frac{\alpha_t(j)\,\beta_t(j)}{P(O \mid \lambda)}
\]
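Finally, a sketch that combines the two tables into $\gamma_t(j)$ and verifies that each $\gamma_t$ is a proper distribution over states; model and names as in the earlier sketches (all assumed):

```python
import numpy as np

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 1, 0]
T, N = len(obs), len(pi)

alpha = np.zeros((T, N))              # forward recursion
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

beta = np.zeros((T, N))               # backward recursion
beta[-1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

P_O = alpha[-1].sum()                 # P(O | lambda)
gamma = alpha * beta / P_O            # gamma[t, j] = alpha_t(j) beta_t(j) / P(O)

assert np.allclose(gamma.sum(axis=1), 1.0)   # each gamma_t sums to 1 over states
```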