
Hidden Markov Model

By: Mohit Goel


Assistant Professor
Lovely Professional University, Jalandhar

Hidden Markov Model


Assume you have a little robot that is trying to estimate the posterior
probability that you are happy or sad, given that the robot has observed
whether you are watching Game of Thrones (w), sleeping (s), crying (c), or
using Facebook (f).
Let the unknown state be X = H if you are happy and X = S if you are sad.
Let Y denote the observation, which can be w, s, c, or f.
We want to answer queries such as:

P(X = H | Y = w) = ?

Hidden Markov Model


Assume that an expert has compiled the following prior and likelihood models:

[Tables: a prior P(X) over the states Happy (H) and Sad (S), and a likelihood
P(Y | X) giving the probability of each observation w, s, c, f in each state;
the numeric entries are not shown here.]

What is the probability that you are happy given that you are watching Game of Thrones?

P(X = H | Y = w) = P(Y = w | X = H) P(X = H) / [P(Y = w | X = H) P(X = H) + P(Y = w | X = S) P(X = S)]
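To make the calculation concrete, here is a minimal Python sketch of this Bayes-rule query. All numeric values are assumptions, since the expert's tables are not reproduced above; substitute the actual numbers from the tables.

# Bayes rule for P(X = H | Y = w). Every number below is assumed
# for illustration; replace with the expert's actual tables.
p_h = 0.6                  # assumed prior P(X = H)
p_s = 1.0 - p_h            # prior P(X = S)
p_w_given_h = 0.5          # assumed likelihood P(Y = w | X = H)
p_w_given_s = 0.1          # assumed likelihood P(Y = w | X = S)

posterior = (p_w_given_h * p_h) / (p_w_given_h * p_h + p_w_given_s * p_s)
print(posterior)           # about 0.882 with the assumed numbers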

Hidden Markov Model


But what if, instead of an absolute prior, we have a transition prior? That
is, we assume a dynamic system:

[Diagram: a chain X1 → X2 with a transition table P(X2 | X1) over the states
Happy (H) and Sad (S); the numeric entries are not shown here.]
Given a history of observations, say Y1 = w, Y2 = f, Y3 = c, we want to compute
the posterior probability that you are happy at step three, i.e. we want to
estimate:

P(X3 = H | Y1 = w, Y2 = f, Y3 = c)

Hidden Markov Model


In general we assume that we have an initial distribution P(X0), a transition
model P(Xt | Xt-1), and an observation model P(Yt | Xt):

P(X0) = [prior table over Happy (H) and Sad (S); values not shown]

P(Xt | Xt-1) = [transition table; values not shown]

P(Yt | Xt) = [observation table; values not shown]

The observation model is the same at every time step: P(Y1 | X1) = P(Y2 | X2) = …
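As a concrete encoding of these three components, the sketch below stores them as NumPy arrays. All numeric values are assumptions (the tables above are blank in this copy); the state order is [Happy, Sad] and the observation order is [w, s, c, f].

import numpy as np

# Assumed values throughout; replace with the slide's actual tables.
prior = np.array([0.6, 0.4])                 # P(X0) over [Happy, Sad]

# transition[i, j] = P(Xt = j | Xt-1 = i); each row sums to 1
transition = np.array([[0.8, 0.2],           # from Happy
                       [0.3, 0.7]])          # from Sad

# emission[i, k] = P(Yt = k | Xt = i) over observations [w, s, c, f]
emission = np.array([[0.5, 0.2, 0.05, 0.25],    # in state Happy
                     [0.1, 0.3, 0.40, 0.20]])   # in state Sad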

Hidden Markov Model


Our goal is to compute, for all t, the distribution

P(Xt | Y1:t) = P(Xt | Y1, Y2, …, Yt)

We derive a recursive algorithm to compute P(Xt | Y1:t) assuming that we have
as input P(Xt-1 | Y1:t-1). This recursion has two steps: 1. Prediction and
2. Bayesian update.

Hidden Markov Model


Step 1. Prediction
We compute the state prediction:

P(Xt | Y1:t-1) = Σ_{Xt-1} P(Xt | Xt-1) P(Xt-1 | Y1:t-1)
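In code, continuing the array sketch given after the model definition (assumed values), the prediction step is a single matrix-vector product:

# belief holds P(Xt-1 | Y1:t-1) as a length-2 vector over [Happy, Sad].
# Summing P(Xt | Xt-1) P(Xt-1 | Y1:t-1) over Xt-1 is transition.T @ belief.
belief = prior                      # at t = 1, the belief is just P(X0)
predicted = transition.T @ belief   # P(Xt | Y1:t-1), a length-2 vector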

Hidden Markov Model


Step 2. Apply Bayes rule to find P(Xt | Y1:t):

P(Xt | Y1:t) = P(Yt | Xt) P(Xt | Y1:t-1) / Σ_{Xt} P(Yt | Xt) P(Xt | Y1:t-1)
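Putting the two steps together gives the recursive filter. The self-contained sketch below runs it on the earlier observation history Y1 = w, Y2 = f, Y3 = c; all model numbers are assumed, as before, with observation indices 0, 3, 2 in the order [w, s, c, f].

import numpy as np

# Assumed model (state order [Happy, Sad], observation order [w, s, c, f]);
# replace these with the slide's actual tables.
prior = np.array([0.6, 0.4])
transition = np.array([[0.8, 0.2],
                       [0.3, 0.7]])
emission = np.array([[0.5, 0.2, 0.05, 0.25],
                     [0.1, 0.3, 0.40, 0.20]])

def filter_step(belief, y):
    """One recursion step: prediction, then Bayesian update."""
    predicted = transition.T @ belief           # Step 1: P(Xt | Y1:t-1)
    unnormalized = emission[:, y] * predicted   # P(Yt | Xt) P(Xt | Y1:t-1)
    return unnormalized / unnormalized.sum()    # normalize over Xt

belief = prior
for y in [0, 3, 2]:                             # Y1 = w, Y2 = f, Y3 = c
    belief = filter_step(belief, y)
print(belief[0])                                # P(X3 = H | Y1 = w, Y2 = f, Y3 = c)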

Speech Recognition using HMM
