SP14 CS188 Lecture 13 - Markov Models

The document discusses Markov Models, which are used to reason over time or space in various applications such as speech recognition and robot localization. It explains the concepts of state, transition probabilities, and joint distributions, along with examples like weather prediction and stationary distributions. The document also outlines the chain rule and provides mathematical formulations related to Markov chains.


CS 188: Artificial Intelligence

Markov Models

Instructors: Dan Klein and Pieter Abbeel --- University of California, Berkeley
[These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]
Reasoning over Time or Space

• Often, we want to reason about a sequence of observations
  • Speech recognition
  • Robot localization
  • User attention
  • Medical monitoring

• Need to introduce time (or space) into our models


Markov Models
• Value of X at a given time is called the state

  X1 → X2 → X3 → X4

• Parameters: called transition probabilities or dynamics; they specify how the state evolves over time (also, initial state probabilities)
• Stationarity assumption: transition probabilities are the same at all times
• Same as the MDP transition model, but with no choice of action
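
A minimal sketch, not from the slides, of how such a model can be written down in Python; the names initial and transition, and the weather probabilities borrowed from the later example, are illustrative choices:

import random  # not needed yet, but used by later sketches

# A Markov model is specified by an initial distribution P(X1)
# and a stationary transition model P(Xt | Xt-1).
initial = {"sun": 1.0, "rain": 0.0}              # P(X1)
transition = {                                    # P(Xt | Xt-1)
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.3, "rain": 0.7},
}

# Sanity check: each row of the transition model is a proper distribution.
for prev, row in transition.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9
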
Joint Distribution of a Markov Model
  X1 → X2 → X3 → X4

• Joint distribution:

  P(X1, X2, X3, X4) = P(X1) P(X2|X1) P(X3|X2) P(X4|X3)

• More generally:

  P(X1, X2, ..., XT) = P(X1) P(X2|X1) P(X3|X2) ... P(XT|XT-1) = P(X1) ∏(t=2..T) P(Xt|Xt-1)

• Questions to be resolved:
  • Does this indeed define a joint distribution? (a numeric check follows below)
  • Can every joint distribution be factored this way, or are we making some assumptions about the joint distribution by using this factorization?
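
A quick numeric check, not part of the original slides, using the weather chain from the later example: enumerating every assignment over four time steps and summing the factored product gives 1, so the factorization does define a joint distribution.

from itertools import product

initial = {"sun": 1.0, "rain": 0.0}
transition = {"sun": {"sun": 0.9, "rain": 0.1},
              "rain": {"sun": 0.3, "rain": 0.7}}

# P(x1, x2, x3, x4) = P(x1) P(x2|x1) P(x3|x2) P(x4|x3)
total = sum(initial[x1] * transition[x1][x2] * transition[x2][x3] * transition[x3][x4]
            for x1, x2, x3, x4 in product(["sun", "rain"], repeat=4))
print(total)   # 1.0 (up to floating point), i.e. a valid joint distribution
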
Chain Rule and Markov Models
  X1 → X2 → X3 → X4

• From the chain rule, every joint distribution over X1, X2, X3, X4 can be written as:

  P(X1, X2, X3, X4) = P(X1) P(X2|X1) P(X3|X1, X2) P(X4|X1, X2, X3)

• Assuming that

  P(X3|X1, X2) = P(X3|X2)   and   P(X4|X1, X2, X3) = P(X4|X3)

  results in the expression posited on the previous slide:

  P(X1, X2, X3, X4) = P(X1) P(X2|X1) P(X3|X2) P(X4|X3)

Chain Rule and Markov Models
  X1 → X2 → X3 → X4

• From the chain rule, every joint distribution over X1, X2, ..., XT can be written as:

  P(X1, X2, ..., XT) = ∏(t=1..T) P(Xt | X1, X2, ..., Xt-1)

• Assuming that for all t:

  P(Xt | X1, X2, ..., Xt-1) = P(Xt | Xt-1)

  gives us the expression posited on the earlier slide:

  P(X1, X2, ..., XT) = P(X1) ∏(t=2..T) P(Xt | Xt-1)

Markov Models Recap
• Explicit assumption for all t:  P(Xt | X1, ..., Xt-1) = P(Xt | Xt-1)
• Consequence, the joint distribution can be written as:

  P(X1, X2, ..., XT) = P(X1) P(X2|X1) P(X3|X2) ... P(XT|XT-1) = P(X1) ∏(t=2..T) P(Xt | Xt-1)

• Additional explicit assumption: P(Xt | Xt-1) is the same for all t


Example Markov Chain: Weather
• States: X = {rain, sun}

• Initial distribution: 1.0 sun

• CPT P(Xt | Xt-1):

  Xt-1   Xt     P(Xt|Xt-1)
  sun    sun    0.9
  sun    rain   0.1
  rain   sun    0.3
  rain   rain   0.7

  [The slide also shows two other ways of representing the same CPT, as diagrams whose arcs are labeled with the probabilities 0.9, 0.1, 0.3, 0.7.]
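
Not something the slides show, but one way to see what this CPT means operationally is to sample weather sequences from it; the helper name sample_chain below is an illustrative choice:

import random

transition = {"sun": {"sun": 0.9, "rain": 0.1},
              "rain": {"sun": 0.3, "rain": 0.7}}

def sample_chain(start, steps):
    """Sample a state sequence X1..Xsteps, starting from the given state."""
    states = [start]
    for _ in range(steps - 1):
        row = transition[states[-1]]
        # Draw the next state with probabilities given by the CPT row.
        states.append(random.choices(list(row), weights=list(row.values()))[0])
    return states

print(sample_chain("sun", 10))   # e.g. ['sun', 'sun', 'sun', 'rain', 'rain', ...]
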
Example Markov Chain: Weather
• Initial distribution: 1.0 sun

  [State diagram: sun→sun 0.9, sun→rain 0.1, rain→sun 0.3, rain→rain 0.7]

• What is the probability distribution after one step?
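
  The answer is not in the extracted text, but it follows from the CPT by one mini-forward update:

  P(X2 = sun)  = P(sun|sun) P(X1 = sun) + P(sun|rain) P(X1 = rain) = 0.9 × 1.0 + 0.3 × 0.0 = 0.9
  P(X2 = rain) = P(rain|sun) P(X1 = sun) + P(rain|rain) P(X1 = rain) = 0.1 × 1.0 + 0.7 × 0.0 = 0.1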


Example Run of Mini-Forward Algorithm
• From initial observation of sun (probabilities listed as (sun, rain), computed with the one-step update above):

  P(X1)        P(X2)        P(X3)          P(X4)               P(X∞)
  (1.0, 0.0)   (0.9, 0.1)   (0.84, 0.16)   (0.804, 0.196)  ...  (0.75, 0.25)

• From initial observation of rain

  P(X1)        P(X2)        P(X3)          P(X4)               P(X∞)
  (0.0, 1.0)   (0.3, 0.7)   (0.48, 0.52)   (0.588, 0.412)  ...  (0.75, 0.25)

• From yet another initial distribution P(X1):

  P(X1)  →  ...  →  P(X∞)                                      [Demo: L13D1,2,3]


Stationary Distributions

• For most chains:
  • Influence of the initial distribution gets less and less over time.
  • The distribution we end up in is independent of the initial distribution.

• Stationary distribution:
  • The distribution we end up with is called the stationary distribution P∞ of the chain.
  • It satisfies

    P∞(X) = Σx P(X|x) P∞(x)

Example: Stationary Distributions
• Question: What’s P(X) at time t = infinity?

  X1 → X2 → X3 → X4

  Xt-1   Xt     P(Xt|Xt-1)
  sun    sun    0.9
  sun    rain   0.1
  rain   sun    0.3
  rain   rain   0.7
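
  The solution step is missing from the extracted text, but it follows from the stationarity equation on the previous slide:

  P∞(sun)  = P(sun|sun) P∞(sun) + P(sun|rain) P∞(rain) = 0.9 P∞(sun) + 0.3 P∞(rain)
  P∞(rain) = P(rain|sun) P∞(sun) + P(rain|rain) P∞(rain) = 0.1 P∞(sun) + 0.7 P∞(rain)

  Together with P∞(sun) + P∞(rain) = 1, this gives P∞(sun) = 3/4 and P∞(rain) = 1/4, matching the (0.75, 0.25) limit of the mini-forward runs above.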

Also:
• Random variable X in {a, b, c}

• Transition matrix P(Xt | Xt-1) (entries can be read off the equations below):

            Xt-1 = a   Xt-1 = b   Xt-1 = c
  Xt = a      2/5        1/5        1/5
  Xt = b      2/5        3/5        2/5
  Xt = c      1/5        1/5        2/5

• Stationarity equation for a:

  P∞(a) = P(a|a) × P∞(a) + P(a|b) × P∞(b) + P(a|c) × P∞(c)
        = 2/5 P∞(a) + 1/5 P∞(b) + 1/5 P∞(c)
  =>  5 P∞(a) = 2 P∞(a) + P∞(b) + P∞(c)
  =>  3 P∞(a) - P∞(b) - P∞(c) = 0                          ...(i)

• Stationarity equation for b:

  P∞(b) = P(b|a) × P∞(a) + P(b|b) × P∞(b) + P(b|c) × P∞(c)
        = 2/5 P∞(a) + 3/5 P∞(b) + 2/5 P∞(c)
  =>  5 P∞(b) = 2 P∞(a) + 3 P∞(b) + 2 P∞(c)
  =>  2 P∞(a) - 2 P∞(b) + 2 P∞(c) = 0
  =>  P∞(a) - P∞(b) + P∞(c) = 0                            ...(ii)

• Solving (i) and (ii) together with the normalization constraint

  P∞(a) + P∞(b) + P∞(c) = 1                                ...(iii)

  gives  P∞(a) = 1/4,  P∞(b) = 1/2,  P∞(c) = 1/4
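
As a sanity check (not part of the slides), the same fixed point can be computed numerically; the sketch below solves the stationarity equations with numpy:

import numpy as np

# Column j holds P(Xt | Xt-1 = j) for j in (a, b, c); rows are the next states a, b, c.
T = np.array([[2/5, 1/5, 1/5],
              [2/5, 3/5, 2/5],
              [1/5, 1/5, 2/5]])

# Stationarity means p = T p.  Rewrite as (T - I) p = 0 and replace the last
# (redundant) equation with the normalization constraint p_a + p_b + p_c = 1.
A = T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

p = np.linalg.solve(A, b)
print(p)   # [0.25 0.5 0.25]  ->  P∞(a) = 1/4, P∞(b) = 1/2, P∞(c) = 1/4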
