Lecture 2: Probability
Conditional Probability

P(a | b) = P(a ∧ b) / P(b)

Rearranging gives the product forms:

P(a ∧ b) = P(b) P(a | b)
P(a ∧ b) = P(a) P(b | a)
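As a quick illustration (a sketch not taken from the slides, assuming two fair dice), the formula can be checked by enumerating equally likely outcomes:

```python
from fractions import Fraction

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def p(event):
    """Probability of an event under the uniform distribution."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

a = lambda o: o[0] + o[1] == 12   # event a: the dice sum to 12
b = lambda o: o[0] == 6           # event b: the first die shows 6

# P(a | b) = P(a ∧ b) / P(b) = (1/36) / (6/36) = 1/6
p_a_given_b = p(lambda o: a(o) and b(o)) / p(b)
print(p_a_given_b)  # 1/6
```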
probability distribution
P(Flight) = ⟨0.6, 0.3, 0.1⟩, giving the probabilities that the flight is on time, delayed, or cancelled, respectively
independence

the knowledge that one event occurs does not affect the probability of the other event

In general, P(a ∧ b) = P(a) P(b | a). If a and b are independent, then P(b | a) = P(b), so the product simplifies:

P(a ∧ b) = P(a) P(b)
For example, rolls of two separate dice are independent:

P(die₁ = 6 ∧ die₂ = 6) = P(die₁ = 6) P(die₂ = 6) = (1/6)(1/6) = 1/36

But two events on the same die are not independent. A single roll cannot show both a 6 and a 4, so:

P(die = 6 ∧ die = 4) ≠ P(die = 6) P(die = 4) = (1/6)(1/6) = 1/36

Instead, using the conditional form:

P(die = 6 ∧ die = 4) = P(die = 6) P(die = 4 | die = 6) = (1/6)(0) = 0
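The same kind of enumeration (a sketch, again assuming fair dice) confirms both cases:

```python
from fractions import Fraction

# Two independent dice: 36 equally likely outcomes.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p_both_six = Fraction(sum(1 for i, j in outcomes if i == 6 and j == 6), 36)
# Independence holds: P(6 ∧ 6) = P(6) · P(6) = 1/36.
assert p_both_six == Fraction(1, 6) * Fraction(1, 6)

# One die: a single roll can never show 6 and 4 at once.
p_six_and_four = Fraction(sum(1 for r in range(1, 7) if r == 6 and r == 4), 6)
assert p_six_and_four == 0          # ≠ P(6) · P(4) = 1/36
```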
Bayes' Rule

Since P(a ∧ b) = P(b) P(a | b) and also P(a ∧ b) = P(a) P(b | a), setting the two products equal and dividing by P(a) gives:

P(b | a) = P(b) P(a | b) / P(a)

or, equivalently:

P(b | a) = P(a | b) P(b) / P(a)
Example: given clouds in the morning (AM), what is the probability of rain in the afternoon (PM)? Suppose 80% of rainy afternoons start with cloudy mornings, 40% of days have cloudy mornings, and 10% of days have rainy afternoons. Then:

P(rain | clouds) = P(clouds | rain) P(rain) / P(clouds) = (0.8)(0.1) / 0.4 = 0.2

Knowing P(cloudy morning | rainy afternoon), we can calculate P(rainy afternoon | cloudy morning). More generally, knowing P(visible effect | unknown cause), we can calculate P(unknown cause | visible effect).
Joint Probability

             R = rain   R = ¬rain
C = cloud      0.08       0.32
C = ¬cloud     0.02       0.58
Probability Rules

Negation: P(¬a) = 1 − P(a)

Inclusion-Exclusion: P(a ∨ b) = P(a) + P(b) − P(a ∧ b)

Marginalization: P(a) = P(a, b) + P(a, ¬b). For example, from the joint table above:

P(C = cloud)
  = P(C = cloud, R = rain) + P(C = cloud, R = ¬rain)
  = 0.08 + 0.32
  = 0.40

Conditioning: P(a | b) = P(a ∧ b) / P(b)
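A small sketch applying marginalization and conditioning to the cloud/rain joint distribution:

```python
# Joint distribution over clouds (C) and rain (R) from the table above.
joint = {
    ("cloud", "rain"): 0.08,    ("cloud", "no rain"): 0.32,
    ("no cloud", "rain"): 0.02, ("no cloud", "no rain"): 0.58,
}

# Marginalization: P(C = cloud) = P(cloud, rain) + P(cloud, ¬rain)
p_cloud = sum(p for (c, _), p in joint.items() if c == "cloud")

# Conditioning: P(R = rain | C = cloud) = P(cloud, rain) / P(C = cloud)
p_rain_given_cloud = joint[("cloud", "rain")] / p_cloud
print(round(p_cloud, 2), round(p_rain_given_cloud, 2))  # 0.4 0.2
```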
The network's nodes and their values:

Rain {none, light, heavy}
Maintenance {yes, no}
Train {on time, delayed}
Appointment {attend, miss}

Rain (unconditional):

none   light   heavy
0.7    0.2     0.1

Maintenance, conditioned on Rain — P(M | R):

R       yes    no

Train, conditioned on Rain and Maintenance — P(T | R, M):

R       M      on time   delayed
none    yes    0.8       0.2
none    no     0.9       0.1
light   yes    0.6       0.4
light   no     0.7       0.3
heavy   yes    0.4       0.6
heavy   no     0.5       0.5
Appointment, conditioned on Train — P(A | T):

T         attend   miss
on time   0.9      0.1
delayed   0.6      0.4
Computing Joint Probabilities

By the chain rule, any joint probability over the network is a product of one entry from each relevant table:

P(light) = 0.2
P(light, no) = P(light) P(no | light)
P(light, no, delayed) = P(light) P(no | light) P(delayed | light, no)
P(light, no, delayed, miss) = P(light) P(no | light) P(delayed | light, no) P(miss | delayed)
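The last product is just a multiplication of table entries. In the sketch below, the Maintenance probability is an assumed placeholder (that table's values are not shown above); the other three factors come from the Rain, Train, and Appointment tables:

```python
# Chain rule: P(light, no, delayed, miss)
#   = P(light) · P(no | light) · P(delayed | light, no) · P(miss | delayed)
p_light = 0.2                 # Rain table
p_no_given_light = 0.8        # ASSUMED placeholder: Maintenance values not shown
p_delayed_given = 0.3         # Train table: P(delayed | light, no)
p_miss_given_delayed = 0.4    # Appointment table: P(miss | delayed)

joint = p_light * p_no_given_light * p_delayed_given * p_miss_given_delayed
print(round(joint, 4))  # 0.0192
```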
Sampling: draw a value for each node in order, conditioning on the values already sampled for its parents. For example:

R = none     (sampled from the Rain distribution)
M = yes      (sampled from P(M | R = none))
T = on time  (sampled from P(T | R = none, M = yes))
A = attend   (sampled from P(A | T = on time))

Repeating the procedure produces many samples:

R = light, M = no,  T = on time, A = miss
R = light, M = yes, T = delayed, A = attend
R = none,  M = no,  T = on time, A = attend
R = none,  M = yes, T = on time, A = attend
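The procedure can be sketched with `random.choices`. The Maintenance probabilities below are assumed placeholders (that table's values are not shown above); the other tables match the network:

```python
import random

# Ancestral sampling: draw each node in topological order, conditioning on
# the values already sampled for its parents.
def sample():
    r = random.choices(["none", "light", "heavy"], weights=[0.7, 0.2, 0.1])[0]
    m_yes = {"none": 0.4, "light": 0.2, "heavy": 0.1}[r]   # ASSUMED values
    m = random.choices(["yes", "no"], weights=[m_yes, 1 - m_yes])[0]
    on_time = {("none", "yes"): 0.8, ("none", "no"): 0.9,
               ("light", "yes"): 0.6, ("light", "no"): 0.7,
               ("heavy", "yes"): 0.4, ("heavy", "no"): 0.5}[(r, m)]
    t = random.choices(["on time", "delayed"], weights=[on_time, 1 - on_time])[0]
    attend = {"on time": 0.9, "delayed": 0.6}[t]
    a = random.choices(["attend", "miss"], weights=[attend, 1 - attend])[0]
    return r, m, t, a

print(sample())  # one random sample, e.g. a tuple of four values
```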
Conditional queries such as P(A | T = on time) can be answered by sampling. Rejection sampling keeps only samples consistent with the evidence: the sample

R = light, M = yes, T = on time, A = attend

is consistent with T = on time and is kept, while samples with T = delayed are discarded. Likelihood weighting avoids discarding samples: fix the evidence variable T = on time, sample the remaining variables as before, and weight each sample by the probability of the evidence given its parents — here, P(T = on time | R = light, M = yes) = 0.6 from the Train table.
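Rejection sampling can be illustrated compactly with the cloud/rain joint distribution from earlier, which needs no hidden tables (a sketch, not the slide's train example):

```python
import random

random.seed(0)  # reproducible sketch

# Joint distribution over (clouds, rain) from the earlier table.
joint = {("cloud", "rain"): 0.08,    ("cloud", "no rain"): 0.32,
         ("no cloud", "rain"): 0.02, ("no cloud", "no rain"): 0.58}
events, weights = zip(*joint.items())

# Draw many samples, reject those inconsistent with the evidence C = cloud,
# then estimate P(R = rain | C = cloud) from the survivors.
kept = [r for (c, r) in random.choices(events, weights=weights, k=100_000)
        if c == "cloud"]
estimate = kept.count("rain") / len(kept)
print(estimate)  # close to the exact value 0.2 computed earlier
```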
Uncertainty over Time
Xt: Weather at time t
Markov assumption
the assumption that the current state
depends on only a finite fixed number of
previous states
Markov Chain
Markov chain
a sequence of random variables where the
distribution of each variable follows the
Markov assumption
Transition Model
Tomorrow (Xt+1)
0.8 0.2
Today (X t )
0.3 0.7
X0 X1 X2 X3 X4
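A minimal simulation of this chain (assuming the two states are sun and rain, as in the weather example):

```python
import random

# Transition model P(X_{t+1} | X_t) from the table above.
transitions = {"sun":  {"sun": 0.8, "rain": 0.2},
               "rain": {"sun": 0.3, "rain": 0.7}}

def simulate(start, steps):
    """Sample a state sequence X_0 .. X_steps from the Markov chain."""
    chain = [start]
    for _ in range(steps):
        nxt = transitions[chain[-1]]
        chain.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return chain

chain = simulate("sun", 4)  # five states: X0 through X4
print(chain)
```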
Sensor Models
Hidden State          Observation
robot's position      robot's sensor data
weather               umbrella
Hidden Markov Models
Hidden Markov Model
a Markov model for a system with hidden
states that generate some observed event
Sensor Model
                   Observation (Et)
                   umbrella   no umbrella
State (Xt)  sun      0.2         0.8
            rain     0.9         0.1
sensor Markov assumption
the assumption that the evidence variable depends only on the corresponding state
X0 X1 X2 X3 X4
E0 E1 E2 E3 E4
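One filtering step of the forward algorithm under these models can be sketched as follows (sun/rain states and the umbrella observation are assumed from the weather example above):

```python
# Transition model P(X_{t+1} | X_t) and sensor model P(umbrella | X_t),
# using the numbers from the tables above.
transition = {"sun": {"sun": 0.8, "rain": 0.2},
              "rain": {"sun": 0.3, "rain": 0.7}}
p_umbrella = {"sun": 0.2, "rain": 0.9}

def forward_step(belief, saw_umbrella):
    """Update the belief over hidden states after one time step + observation."""
    # Predict: push the current belief through the transition model.
    predicted = {s: sum(belief[prev] * transition[prev][s] for prev in belief)
                 for s in transition}
    # Update: weight by the likelihood of the observation, then normalize.
    lik = p_umbrella if saw_umbrella else {s: 1 - p for s, p in p_umbrella.items()}
    unnorm = {s: predicted[s] * lik[s] for s in predicted}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Starting from a uniform belief, seeing an umbrella shifts mass toward rain.
belief = forward_step({"sun": 0.5, "rain": 0.5}, saw_umbrella=True)
print(belief)  # rain becomes much more likely than sun
```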