Lecture 2, 2023
[Diagram: the localization loop — a measurement (sensor) model drives the update step and a motion model drives the prediction step, repeatedly answering "Where (on the map) am I?"]
Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

P(x | y) = P(y | x) P(x) / P(y)   (likelihood × prior / evidence)

• Conditioned on additional variables:

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Normalization

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = 1 / P(y) = [ Σx P(y | x) P(x) ]⁻¹

∫ p(x | y) dx = 1
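As a minimal sketch of this normalization for a discrete state (Python; the numbers are the door-example values used later in the lecture):

```python
# Discrete Bayes rule with explicit normalization.
prior = {"open": 0.5, "closed": 0.5}        # P(x)
likelihood = {"open": 0.6, "closed": 0.3}   # P(y | x)

unnormalized = {x: likelihood[x] * prior[x] for x in prior}
eta = 1.0 / sum(unnormalized.values())      # eta = 1 / P(y) = [sum_x P(y|x) P(x)]^-1
posterior = {x: eta * p for x, p in unnormalized.items()}
print(posterior)  # {'open': 0.666..., 'closed': 0.333...} -- the posterior sums to 1
```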
Markov Assumption
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)
Conditioning on additional variables
P(x) = ∫ P(x, z) dz

P(x) = ∫ P(x | z) P(z) dz

P(x | y) = ∫ P(x | y, z) P(z | y) dz
Markov Assumption
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)
Bayes Filters
Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

[Diagram: the localization loop again — the measurement/perception (sensor) model drives the update step, the motion/action model drives the prediction step, answering "Where (on the map) am I?"]
Bayes Filters
Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

Bel(xt−1)  →  [prediction step]  →  b̄el(xt)  →  [update step]  →  Bel(xt)
Discrete Bayes Filter Algorithm
(sequence of perception and action doesn’t matter)
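A minimal Python sketch of one step of the discrete Bayes filter; the function name and dictionary layout are illustrative, not from the lecture:

```python
def discrete_bayes_step(bel, u, z, p_trans, p_meas):
    """One prediction + update step of the discrete Bayes filter.

    bel:     dict mapping state -> Bel(x_{t-1})
    p_trans: dict mapping (x_t, u, x_{t-1}) -> P(x_t | u, x_{t-1})
    p_meas:  dict mapping (z, x_t) -> P(z_t | x_t)
    """
    states = list(bel)
    # Prediction step: bel_bar(x_t) = sum_{x_{t-1}} P(x_t | u, x_{t-1}) * Bel(x_{t-1})
    bel_bar = {x: sum(p_trans[(x, u, xp)] * bel[xp] for xp in states) for x in states}
    # Update step: Bel(x_t) = eta * P(z_t | x_t) * bel_bar(x_t)
    unnorm = {x: p_meas[(z, x)] * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}
```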
State Estimation Example 1a
• Want to estimate state X={open, closed}
• Suppose a robot obtains measurement z
• What is P(X=open|Z=z1)?
(update step)
Reminder: Using Bayes Rule to relate Causal vs. Diagnostic Reasoning
• P(open | z) is diagnostic.
• P(z | open) is causal.
• Often causal knowledge is easier to obtain. Do experiments: count frequencies!
• Bayes rule allows us to use causal knowledge to diagnose:

P(open | z) = P(z | open) P(open) / P(z)
State Estimation Example 1a contd.
• Let z1 represent a laser rangefinder reading of > 1m
• i.e. Z1 is a discrete random variable, Z1 = {> 1m, ≤ 1m}
• P(z1 | open) = 0.6,  P(z1 | ¬open) = 0.3
• P(open) = P(¬open) = 0.5 (initial prior)

P(open | z1) = P(z1 | open) P(open) / [ P(z1 | open) P(open) + P(z1 | ¬open) P(¬open) ]
             = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67
Combining measurements recursively (using the Markov assumption that zn is conditionally independent of z1, …, zn−1 given x):

P(x | z1, …, zn) = P(zn | x) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
                 = η P(zn | x) P(x | z1, …, zn−1)
                 = η1…n [ ∏i=1…n P(zi | x) ] P(x)
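A quick numeric check of Example 1a (a sketch; the helper name and signature are made up for illustration):

```python
def door_measurement_update(prior_open, p_z_open, p_z_not_open):
    """Bayes update of P(open) for the binary door state given one measurement z."""
    num = p_z_open * prior_open
    return num / (num + p_z_not_open * (1.0 - prior_open))

print(door_measurement_update(0.5, 0.6, 0.3))  # 0.666..., i.e. P(open | z1) = 2/3
```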
Example 1b: Second Measurement
• Let z2 represent a sonar reading < 1m
• P(z2 | open) = 0.5 = 1/2,  P(z2 | ¬open) = 0.6 = 3/5
• P(open | z1) = 2/3 (current prior, from 1a)

P(open | z2, z1) = P(z2 | open) P(open | z1) / [ P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1) ]
                 = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = 5/8 = 0.625
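Reusing the door_measurement_update helper sketched after Example 1a, the two measurement updates chain as follows:

```python
p_open = door_measurement_update(0.5, 0.6, 0.3)      # after z1: 2/3
p_open = door_measurement_update(p_open, 0.5, 0.6)   # after z2: 5/8
print(p_open)  # 0.625
```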
Typical Actions
Example 1c: Closing the door
State Transition Diagrams
(for representing Markov chains)

Example 1c (action u = "close door"):
[Diagram: open → closed with probability 0.9, open → open with 0.1; closed → closed with probability 1, closed → open with 0.]
Reminder: Integrating the Outcome of Actions

Continuous case:
P(xt | ut) = ∫ P(xt | ut, xt−1) P(xt−1) dxt−1

Discrete case:
P(xt | ut) = Σxt−1 P(xt | ut, xt−1) P(xt−1)
Example 1c: The Resulting Belief
• Let u represent the action "close door"
• P(open) = P(open | z1, z2) = 5/8 (prior, from 1b)
• P(closed) = P(closed | z1, z2) = 1 − 5/8 = 3/8

P(closed | u) = Σxt−1 P(closed | u, xt−1) P(xt−1)
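A small sketch of this prediction step in Python, using the transition probabilities from the state transition diagram above:

```python
# Prior from Example 1b and the transition model for u = "close door".
bel = {"open": 5 / 8, "closed": 3 / 8}
p_trans = {("closed", "open"): 0.9, ("open", "open"): 0.1,       # starting from "open"
           ("closed", "closed"): 1.0, ("open", "closed"): 0.0}   # starting from "closed"

# Prediction step: P(x_t | u) = sum_{x_{t-1}} P(x_t | u, x_{t-1}) * P(x_{t-1})
bel_bar = {x: sum(p_trans[(x, xp)] * bel[xp] for xp in bel) for x in bel}
print(bel_bar)  # {'open': 0.0625, 'closed': 0.9375}, i.e. P(closed | u) = 15/16
```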
Bel(xt−1)  →  [prediction step]  →  b̄el(xt)  →  [update step]  →  Bel(xt)
Parametric representations:
• Gaussian, χ², etc. distributions
• Summarized concisely (e.g. μ, σ)
• May or may not represent the actual data or pdf
• Makes assumptions!

Non-parametric representations:
• The data itself, samples from a pdf, histogram
• Sometimes can be summarized by median, mode, range, etc.
• Can represent non-Gaussian, etc. pdfs
• Doesn't assume!
Parametric pdf representation

Univariate: p(x) ~ N(μ, σ²):

p(x) = (1 / √(2π σ²)) · exp( −(x − μ)² / (2σ²) )

[Figure: univariate and multivariate Gaussian pdfs]
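The univariate density written out in code (a sketch; the evaluation point and parameters are arbitrary):

```python
import math

def gaussian_pdf(x, mu, sigma2):
    """Evaluate the univariate normal density N(mu, sigma2) at x."""
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

print(gaussian_pdf(0.0, 0.0, 1.0))  # ~0.3989, the peak of the standard normal
```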
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)
[Diagram: one update timestep — the prediction produces b̄el(xt), the measurement update produces Bel(xt).]
Kalman filter (Thrun Chp 3)
• A parametric Bayes Filter that represents probability density functions with Gaussians
• Bel(xt) is updated by exploiting the properties of Gaussians with respect to the mean, μ, and variance, σ²
• We will be considering discrete-time Kalman Filters (KFs)
General 1-D Kalman Filter
Prediction step:  p(xt | xt−1, ut)    xt = a·xt−1 + b·ut + εt
Update step:      p(zt | xt)          zt = c·xt + δt
Belief:           Bel(xt) = p(xt | z1:t, u1:t) ~ N(μt, σt²)
Predicted belief: b̄el(xt) ~ N(μ̄t, σ̄t²)

[Diagram: one update timestep — the prediction produces b̄el(xt), the measurement update produces Bel(xt).]
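A sketch that simulates this state-space model; the coefficients, control input, and noise variances are made-up values, not from the lecture:

```python
import random

# Hypothetical parameters for x_t = a*x_{t-1} + b*u_t + eps_t and z_t = c*x_t + delta_t.
a, b, c = 1.0, 1.0, 1.0
r2, q2 = 0.01, 0.25          # variances of motion noise eps_t and measurement noise delta_t

x, u = 0.0, 0.1              # initial state and a constant control input
for t in range(5):
    x = a * x + b * u + random.gauss(0.0, r2 ** 0.5)   # sample from the motion model
    z = c * x + random.gauss(0.0, q2 ** 0.5)           # sample from the measurement model
    print(t, round(x, 3), round(z, 3))
```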
Kalman Filter Example 1
Example: 1-D KF with ct = 1
μt = μ̄t + kt (zt − μ̄t)
1-D KF
Property of Gaussians used by the KF (linear scaling plus an additive offset):

X ~ N(μ, σ²),  Y = aX + b  ⇒  Y ~ N(aμ + b, a²σ²)
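A quick Monte Carlo sanity check of this property (arbitrary a, b, μ, σ):

```python
import random
import statistics

a, b, mu, sigma = 2.0, 1.0, 3.0, 0.5
ys = [a * random.gauss(mu, sigma) + b for _ in range(100_000)]
print(statistics.mean(ys))    # ~ a*mu + b = 7.0
print(statistics.pstdev(ys))  # ~ |a|*sigma = 1.0
```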
Kalman Filter algorithm in 1-D
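A minimal 1-D Kalman filter sketch assembled from the prediction and update equations on the surrounding slides; r² and q² are my labels for the motion- and measurement-noise variances, and the default arguments are placeholders:

```python
def kf_predict(mu, sigma2, u, a=1.0, b=1.0, r2=0.0):
    """Prediction step for x_t = a*x_{t-1} + b*u_t + eps_t with eps_t ~ N(0, r2)."""
    mu_bar = a * mu + b * u
    sigma2_bar = a * a * sigma2 + r2
    return mu_bar, sigma2_bar


def kf_update(mu_bar, sigma2_bar, z, c=1.0, q2=1.0):
    """Update step for z_t = c*x_t + delta_t with delta_t ~ N(0, q2)."""
    k = sigma2_bar * c / (c * c * sigma2_bar + q2)   # Kalman gain (reduces to the slide's k_t when c = 1)
    mu = mu_bar + k * (z - c * mu_bar)
    sigma2 = (1.0 - k * c) * sigma2_bar
    return mu, sigma2
```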
Recall Kalman Filter Example 1
Example: 1-D KF with ct = 1
Start with an update step.

Measurement: N(zt, qt²)
Predicted belief: b̄el(xt) ~ N(μ̄t, σ̄t²)

Updated belief bel(xt):
μt  = kt·zt + (1 − kt)·μ̄t
σt² = (1 − kt)·σ̄t²
with kt = σ̄t² / (σ̄t² + qt²)

Here: ct = 1
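Plugging hypothetical numbers into the kf_update sketch above (with ct = 1):

```python
mu, sigma2 = kf_update(mu_bar=1.0, sigma2_bar=0.5, z=1.4, q2=0.25)
print(mu, sigma2)  # k_t = 0.5 / 0.75 = 2/3, so mu ~ 1.267 and sigma2 ~ 0.167
```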
Example: 1-D KF with ct = 1
Next timestep
[Figure: the Gaussians bel(xt−1), b̄el(xt), bel(xt), and the measurement density N(zt, qt²) over one timestep.]