Introduction to State Estimation
Arunkumar Byravan
CSE 490R – Lecture 3
Interaction loop
• Sense: Receive sensor data and estimate "state"
• Plan: Generate long-term plans based on state & goal
• Act: Apply actions to the robot
[Figure: Sense–Plan–Act loop connecting State to the three steps. Credit: Byron Boots – Statistical Robotics at GaTech]
• E.g. Gaussian PDF
[Figure: bell-shaped Gaussian probability density function]
Joint and Conditional Probability
• P(X = x and Y = y) = P(x, y)
• Normalization:        ∑_x P(x) = 1          ∫ p(x) dx = 1
• Marginalization:      P(x) = ∑_y P(x, y)          p(x) = ∫ p(x, y) dy
• Total probability:    P(x) = ∑_y P(x | y) P(y)          p(x) = ∫ p(x | y) p(y) dy
Events
  X    Y    P
  +x   +y   0.2
  +x   -y   0.3
  -x   +y   0.4
  -x   -y   0.1

• P(+x, +y) ?
• P(+x) ?
• P(-y OR +x) ?
• Independent?
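These queries can be checked directly from the table; a minimal Python sketch (encoding +x/-x and +y/-y as +1/-1 is my own convention):

```python
# Joint distribution from the slide's table; +1/-1 encode +x/-x and +y/-y.
P = {(+1, +1): 0.2, (+1, -1): 0.3, (-1, +1): 0.4, (-1, -1): 0.1}

def marg_x(val):
    """Marginal P(X = val): sum the joint over Y."""
    return sum(p for (x, _), p in P.items() if x == val)

def marg_y(val):
    """Marginal P(Y = val): sum the joint over X."""
    return sum(p for (_, y), p in P.items() if y == val)

p_xy = P[(+1, +1)]                     # P(+x, +y) = 0.2
p_x = marg_x(+1)                       # P(+x) = 0.2 + 0.3 = 0.5
# Inclusion-exclusion: P(-y OR +x) = P(-y) + P(+x) - P(+x, -y)
p_or = marg_y(-1) + p_x - P[(+1, -1)]  # 0.4 + 0.5 - 0.3 = 0.6
# Independent iff P(x, y) = P(x) P(y) in every cell.
independent = all(abs(P[(x, y)] - marg_x(x) * marg_y(y)) < 1e-9
                  for (x, y) in P)     # False, since 0.2 != 0.5 * 0.6
```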
Marginal Distributions
  X    Y    P
  +x   +y   0.2
  +x   -y   0.3
  -x   +y   0.4
  -x   -y   0.1

  X    P            Y    P
  +x   ___          +y   ___
  -x   ___          -y   ___
Conditional Probabilities
  X    Y    P
  +x   +y   0.2
  +x   -y   0.3
  -x   +y   0.4
  -x   -y   0.1

• P(+x | +y) ?
• P(-x | +y) ?
• P(-y | +x) ?
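Each answer is a joint entry divided by the marginal of the conditioning variable; a small Python sketch (function names are my own):

```python
# Joint table from the slide, keyed by (x, y) labels.
joint = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3,
         ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

def p_x_given_y(x, y):
    """P(X = x | Y = y) = P(x, y) / P(y)."""
    p_y = sum(p for (_, yy), p in joint.items() if yy == y)
    return joint[(x, y)] / p_y

def p_y_given_x(y, x):
    """P(Y = y | X = x) = P(x, y) / P(x)."""
    p_x = sum(p for (xx, _), p in joint.items() if xx == x)
    return joint[(x, y)] / p_x

a = p_x_given_y("+x", "+y")  # 0.2 / 0.6 = 1/3
b = p_x_given_y("-x", "+y")  # 0.4 / 0.6 = 2/3
c = p_y_given_x("-y", "+x")  # 0.3 / 0.5 = 3/5
```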
Bayes Formula
  P(x, y) = P(x | y) P(y) = P(y | x) P(x)

  ⇒  P(x | y) = P(y | x) P(x) / P(y)  =  (likelihood · prior) / evidence

Door example: a sensor reading z suggests the door is open.

  P(open | z) = P(z | open) P(open) / [ P(z | open) P(open) + P(z | ¬open) P(¬open) ]

  P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67
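The door calculation can be reproduced in a few lines; a sketch assuming a binary hypothesis (open vs. not open):

```python
def bayes_binary(prior, lik_true, lik_false):
    """P(hypothesis | z) for a binary hypothesis via Bayes rule.

    prior = P(open), lik_true = P(z | open), lik_false = P(z | ~open).
    """
    evidence = lik_true * prior + lik_false * (1 - prior)  # P(z)
    return lik_true * prior / evidence

p_open = bayes_binary(prior=0.5, lik_true=0.6, lik_false=0.3)  # 2/3 ≈ 0.67
```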
Normalization

  P(x | y) = P(y | x) P(x) / ∑_{x'} P(y | x') P(x')

Algorithm:
  ∀x : aux_{x|y} = P(y | x) P(x)
  η = 1 / ∑_x aux_{x|y}
  ∀x : P(x | y) = η · aux_{x|y}
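The algorithm never computes the evidence P(y) directly; a Python sketch over a discrete state set (representing distributions as dicts is my own choice):

```python
def normalized_posterior(prior, likelihood):
    """P(x | y) via the normalization trick: scale P(y | x) P(x) to sum to 1."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_{x|y}
    eta = 1.0 / sum(aux.values())                       # normalizer eta
    return {x: eta * a for x, a in aux.items()}

post = normalized_posterior({"open": 0.5, "closed": 0.5},
                            {"open": 0.6, "closed": 0.3})
# post["open"] is again 2/3, matching the door example, and the
# evidence P(z) never had to be computed explicitly.
```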
Conditioning
• Bayes rule and background knowledge:

  P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

• Which (if any) of these is correct?

  P(x | y) =? ∫ P(x | y, z) P(z) dz
  P(x | y) =? ∫ P(x | y, z) P(z | y) dz
  P(x | y) =? ∫ P(x | y, z) P(y | z) dz
Conditioning
• Bayes rule and background knowledge:
  P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

• Conditioning on the background variable z:

  P(x | y) = ∫ P(x | y, z) P(z | y) dz
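The identity P(x | y) = ∑_z P(x | y, z) P(z | y) can be verified numerically on any discrete joint distribution; a sketch using a randomly generated joint over three binary variables (purely illustrative):

```python
import itertools
import random

random.seed(0)  # deterministic illustration
vals = [0, 1]
raw = {k: random.random() for k in itertools.product(vals, vals, vals)}
total = sum(raw.values())
P3 = {k: v / total for k, v in raw.items()}  # a random joint P(x, y, z)

def p(x=None, y=None, z=None):
    """Sum P3 over all unspecified variables."""
    return sum(v for (xx, yy, zz), v in P3.items()
               if (x is None or xx == x)
               and (y is None or yy == y)
               and (z is None or zz == z))

lhs = p(x=1, y=1) / p(y=1)  # P(x | y)
# sum_z P(x | y, z) P(z | y)
rhs = sum(p(x=1, y=1, z=zz) / p(y=1, z=zz) * p(y=1, z=zz) / p(y=1)
          for zz in vals)
# lhs equals rhs up to floating point: conditioning on z and then
# summing it out recovers P(x | y).
```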
Conditional Independence
  P(x, y | z) = P(x | z) P(y | z)

• Equivalent to:

  P(x | z) = P(x | z, y)
  and
  P(y | z) = P(y | z, x)
Simple Example of State Estimation
• Suppose our robot obtains another observation z2.
• What is P(open | z1, z2)?
Recursive Bayesian Updating
  P(x | z1, …, zn) = P(zn | x, z1, …, zn-1) P(x | z1, …, zn-1) / P(zn | z1, …, zn-1)

Door example, after a second observation z2:

  P(open | z2, z1) = P(z2 | open) P(open | z1) / [ P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1) ]

  P(open | z2, z1) = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = 5/8 = 0.625
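The arithmetic above can be reproduced exactly with rational numbers; a sketch of one recursive update step:

```python
from fractions import Fraction

def recursive_update(prior_open, lik_open, lik_not_open):
    """Fold one observation into P(open | z1, ..., zn-1) via Bayes rule."""
    num = lik_open * prior_open
    return num / (num + lik_not_open * (1 - prior_open))

# Slide's numbers: P(z2 | open) = 1/2, P(z2 | ~open) = 3/5,
# and P(open | z1) = 2/3 from the first update.
p = recursive_update(Fraction(2, 3), Fraction(1, 2), Fraction(3, 5))
print(p)  # 5/8, i.e. 0.625
```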
Bayes Filters
u = action
x = state
Bel ( xt ) P ( xt | u1 , z1 , ut , zt )
Bayes P ( zt | xt , u1 , z1 , , ut ) P ( xt | u1 , z1 , , ut )
Markov P ( zt | xt ) P ( xt | u1 , z1 , , ut )
Total prob.
Markov P( zt | xt ) P ( xt | ut , xt 1 ) P( xt 1 | u1 , z1 , , ut ) dxt 1
Common instantiations of the Bayes filter:
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
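A minimal discrete Bayes filter ties the derivation together: a prediction step (total probability over the motion model) followed by a correction step (Bayes rule with the sensor model, then normalization). The door domain below, including all motion and sensor numbers, is an illustrative assumption, not taken from the lecture:

```python
STATES = ("open", "closed")

# Assumed motion model P(x_t | u_t, x_{t-1}): pushing tends to open the door.
MOTION = {("push", "open", "open"): 1.0, ("push", "open", "closed"): 0.8,
          ("push", "closed", "open"): 0.0, ("push", "closed", "closed"): 0.2}
# Assumed sensor model P(z_t | x_t).
SENSOR = {("sense_open", "open"): 0.6, ("sense_open", "closed"): 0.3,
          ("sense_closed", "open"): 0.4, ("sense_closed", "closed"): 0.7}

def bayes_filter_step(bel, u, z):
    """One Bel(x_{t-1}) -> Bel(x_t) step of the discrete Bayes filter."""
    # Prediction: total probability over the previous state.
    bel_bar = {x: sum(MOTION[(u, x, xp)] * bel[xp] for xp in STATES)
               for x in STATES}
    # Correction: weight by the sensor model, then normalize (eta).
    unnorm = {x: SENSOR[(z, x)] * bel_bar[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * v for x, v in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter_step(bel, "push", "sense_open")
# bel["open"] is now about 0.947: a push plus a "door open" reading
# make the open hypothesis very likely.
```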
Summary
• Bayes rule allows us to compute probabilities that are hard to assess otherwise.
• Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
• Bayes filters are a probabilistic tool for estimating the state of dynamic systems.