State Estimation
Probability, Bayes Filtering
Arunkumar Byravan
CSE 490R – Lecture 3
Interaction loop
• Sense: Receive sensor data and estimate “state”
• Plan: Generate long-term plans based on state & goal
• Act: Apply actions to the robot
[Figure: Sense → Plan → Act loop around the state. Credit: Byron Boots – Statistical Robotics at GaTech]
Markov assumption: the future is independent of the past, given the present state.
State Estimation
• Estimate “state” using sensor data & actions:
  • Robot position and orientation
  • Environment map
  • Location of people, other robots, etc.
• Indoor localization for mobile robots (given a map):
  • Fuse information from different sensors (LIDAR, wheel encoders)
  • Probabilistic filtering
Why a probabilistic approach?
Explicitly represent uncertainty using the calculus of probability theory.
Discrete Random Variables
• X denotes a random variable.
• X can take on a countable number of values in {x_1, x_2, …, x_n}.
• P(X = x_i), or P(x_i), is the probability that the random variable X takes on value x_i.
• P(·) is called the probability mass function.
• E.g. P(Room) = ⟨0.7, 0.2, 0.08, 0.02⟩
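As a quick sketch, a PMF like this is just a normalized table; the room labels below are hypothetical stand-ins, since the slide lists only the four probabilities:

# A discrete PMF as a dictionary: value -> probability.
# Room labels are illustrative assumptions; the slide gives only the numbers.
p_room = {"office": 0.7, "kitchen": 0.2, "hallway": 0.08, "lab": 0.02}

assert abs(sum(p_room.values()) - 1.0) < 1e-9  # a PMF must sum to 1
print(p_room["office"])                        # P(Room = office) = 0.7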
Continuous Random Variables
• X takes on values in the continuum.
• p(X = x), or p(x), is the probability density function.
• E.g. the Gaussian PDF:

  p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
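A minimal sketch of evaluating this density in Python, using only the standard library:

import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    # p(x) = exp(-(x - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

print(gaussian_pdf(0.0))  # peak of the standard normal, ~0.3989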
Joint and Conditional Probability
• P(X = x and Y = y) = P(x, y)
• If X and Y are independent then
P(x, y) = P(x) P(y)
• P(x | y) is the probability of x given y
P(x | y) = P(x , y) / P(y)
P(x , y) = P(x | y) P(y)
• If X and Y are independent then
P(x | y) = P(x)
Law of Total Probability, Marginals
Discrete case:

  \sum_x P(x) = 1
  P(x) = \sum_y P(x, y)
  P(x) = \sum_y P(x \mid y)\, P(y)

Continuous case:

  \int p(x)\, dx = 1
  p(x) = \int p(x, y)\, dy
  p(x) = \int p(x \mid y)\, p(y)\, dy
Events
  X  |  Y  |  P
 ----+-----+-----
 +x  | +y  | 0.2
 +x  | -y  | 0.3
 -x  | +y  | 0.4
 -x  | -y  | 0.1

• P(+x, +y)?
• P(+x)?
• P(-y OR +x)?
• Independent?
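One way to check these queries is to encode the joint table and sum the matching rows; a sketch:

# Joint distribution P(X, Y) from the table above.
joint = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3, ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

p_joint = joint[("+x", "+y")]                                              # P(+x, +y) = 0.2
p_x = sum(p for (x, y), p in joint.items() if x == "+x")                   # P(+x) = 0.5
p_or = sum(p for (x, y), p in joint.items() if y == "-y" or x == "+x")     # P(-y OR +x) = 0.6
p_y = sum(p for (x, y), p in joint.items() if y == "+y")                   # P(+y) = 0.6
print(p_joint == p_x * p_y)  # False: 0.2 != 0.5 * 0.6, so X and Y are not independent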
Marginal Distributions
Joint distribution:

  X  |  Y  |  P
 ----+-----+-----
 +x  | +y  | 0.2
 +x  | -y  | 0.3
 -x  | +y  | 0.4
 -x  | -y  | 0.1

Marginal distributions (fill in by summing out the other variable):

  X  |  P
 +x  |  ?
 -x  |  ?

  Y  |  P
 +y  |  ?
 -y  |  ?
Conditional Probabilities
  X  |  Y  |  P
 ----+-----+-----
 +x  | +y  | 0.2
 +x  | -y  | 0.3
 -x  | +y  | 0.4
 -x  | -y  | 0.1

• P(+x | +y)?
• P(-x | +y)?
• P(-y | +x)?
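These follow directly from the definition P(x | y) = P(x, y) / P(y); a sketch reusing the same table:

joint = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3, ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

def marginal(axis, val):
    # Sum the joint over the other variable (axis 0 is X, axis 1 is Y).
    return sum(p for key, p in joint.items() if key[axis] == val)

print(joint[("+x", "+y")] / marginal(1, "+y"))  # P(+x | +y) = 0.2 / 0.6 = 1/3
print(joint[("-x", "+y")] / marginal(1, "+y"))  # P(-x | +y) = 0.4 / 0.6 = 2/3
print(joint[("+x", "-y")] / marginal(0, "+x"))  # P(-y | +x) = 0.3 / 0.5 = 0.6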
Bayes Formula
P(x, y) = P(x \mid y)\, P(y) = P(y \mid x)\, P(x)

\Rightarrow\quad P(x \mid y) = \frac{P(y \mid x)\, P(x)}{P(y)} = \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}}
• Often causal knowledge (e.g., P(z | open)) is easier to obtain than diagnostic knowledge (e.g., P(open | z)).
• Bayes rule allows us to use causal knowledge.
Simple Example of State Estimation
• Suppose a robot obtains a measurement z of a door.
• What is P(open | z)?
Example
Assume P(z \mid open) = 0.6, P(z \mid \neg open) = 0.3, and prior P(open) = P(\neg open) = 0.5.

P(open \mid z) = \frac{P(z \mid open)\, P(open)}{P(z \mid open)\, P(open) + P(z \mid \neg open)\, P(\neg open)}

P(open \mid z) = \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5} = \frac{2}{3} \approx 0.67
• z raises the probability that the door is open.
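The arithmetic is easy to verify in code; a sketch with the slide's numbers:

# Sensor model and prior from the slide.
p_z_open, p_z_closed = 0.6, 0.3
p_open = p_closed = 0.5

# Bayes rule; the denominator is the evidence P(z) via total probability.
posterior = p_z_open * p_open / (p_z_open * p_open + p_z_closed * p_closed)
print(posterior)  # 0.666...: z raises the probability that the door is open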
Normalization
P(x \mid y) = \frac{P(y \mid x)\, P(x)}{P(y)} = \eta\, P(y \mid x)\, P(x)

\eta = P(y)^{-1} = \frac{1}{\sum_{x'} P(y \mid x')\, P(x')}

Algorithm:
  \forall x : \text{aux}_{x \mid y} = P(y \mid x)\, P(x)
  \eta = \frac{1}{\sum_x \text{aux}_{x \mid y}}
  \forall x : P(x \mid y) = \eta\, \text{aux}_{x \mid y}
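A direct transcription of this algorithm for a finite state space (a sketch; the function and argument names are illustrative):

def bayes_update(likelihood, prior):
    # aux_{x|y} = P(y|x) P(x)
    aux = {x: likelihood[x] * prior[x] for x in prior}
    # eta = 1 / sum_x aux_{x|y}
    eta = 1.0 / sum(aux.values())
    # P(x|y) = eta * aux_{x|y}
    return {x: eta * a for x, a in aux.items()}

# Reproduces the door example: P(open | z) = 2/3.
print(bayes_update({"open": 0.6, "closed": 0.3}, {"open": 0.5, "closed": 0.5}))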
Conditioning
• Bayes rule and background knowledge:
P(x \mid y, z) = \frac{P(y \mid x, z)\, P(x \mid z)}{P(y \mid z)}

• Which of these equals P(x \mid y)?

  P(x \mid y) \overset{?}{=} \int P(x \mid y, z)\, P(z)\, dz

  P(x \mid y) \overset{?}{=} \int P(x \mid y, z)\, P(z \mid y)\, dz

  P(x \mid y) \overset{?}{=} \int P(x \mid y, z)\, P(y \mid z)\, dz
Conditioning
• Bayes rule and background knowledge:
P(x \mid y, z) = \frac{P(y \mid x, z)\, P(x \mid z)}{P(y \mid z)}

P(x \mid y) = \int P(x \mid y, z)\, P(z \mid y)\, dz
Conditional Independence
P(x, y \mid z) = P(x \mid z)\, P(y \mid z)

• Equivalent to

  P(x \mid z) = P(x \mid z, y)

  and

  P(y \mid z) = P(y \mid z, x)
Simple Example of State Estimation
• Suppose our robot obtains another observation z_2.
• What is P(open | z_1, z_2)?
Recursive Bayesian Updating
P(x \mid z_1, \ldots, z_n) = \frac{P(z_n \mid x, z_1, \ldots, z_{n-1})\, P(x \mid z_1, \ldots, z_{n-1})}{P(z_n \mid z_1, \ldots, z_{n-1})}

Markov assumption: z_n is conditionally independent of z_1, \ldots, z_{n-1} given x, so the update simplifies to

P(x \mid z_1, \ldots, z_n) = \eta\, P(z_n \mid x)\, P(x \mid z_1, \ldots, z_{n-1})
Example: Second Measurement
Assume P(z_2 \mid open) = \tfrac{1}{2} and P(z_2 \mid \neg open) = \tfrac{3}{5}.

P(open \mid z_2, z_1) = \frac{P(z_2 \mid open)\, P(open \mid z_1)}{P(z_2 \mid open)\, P(open \mid z_1) + P(z_2 \mid \neg open)\, P(\neg open \mid z_1)}

= \frac{\tfrac{1}{2} \cdot \tfrac{2}{3}}{\tfrac{1}{2} \cdot \tfrac{2}{3} + \tfrac{3}{5} \cdot \tfrac{1}{3}} = \frac{5}{8} = 0.625
• z2 lowers the probability that the door is open.
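Recursively, the posterior after z1 becomes the prior for z2; a self-contained sketch that reproduces both numbers:

def bayes_update(likelihood, prior):
    # One measurement update: posterior proportional to likelihood * prior, normalized.
    unnorm = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

belief = {"open": 0.5, "closed": 0.5}                        # prior
belief = bayes_update({"open": 0.6, "closed": 0.3}, belief)  # after z1: P(open) = 2/3
belief = bayes_update({"open": 0.5, "closed": 0.6}, belief)  # after z2: P(open) = 5/8
print(belief["open"])  # 0.625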
Bayes Filters: Framework
• Given:
• Stream of observations z and action data u:
  d_t = \{u_1, z_1, \ldots, u_t, z_t\}
• Sensor model P(z|x).
• Action model P(x|u,x’).
• Prior probability of the system state P(x).
• Wanted:
• Estimate of the state X of a dynamical system.
• The posterior of the state is also called Belief:
  Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)
Bayes Filters
(z = observation, u = action, x = state)

Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)

  (Bayes)       = \eta\, P(z_t \mid x_t, u_1, z_1, \ldots, u_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)
  (Markov)      = \eta\, P(z_t \mid x_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)
  (Total prob.) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_1, z_1, \ldots, u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}
  (Markov)      = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, z_{t-1})\, dx_{t-1}
                = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}
Bayes Filter Algorithm

Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}
1.  Algorithm Bayes_filter(Bel(x), d):
2.    \eta = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel'(x) = P(z \mid x)\, Bel(x)
6.        \eta = \eta + Bel'(x)
7.      For all x do
8.        Bel'(x) = \eta^{-1}\, Bel'(x)
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel'(x) = \int P(x \mid u, x')\, Bel(x')\, dx'
12.   Return Bel'(x)
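A minimal discrete-state sketch of this algorithm; the door domain, the "push" action, and its success probability below are illustrative assumptions, not from the slides:

def bayes_filter(bel, d, kind, sensor_model=None, motion_model=None):
    # bel: dict state -> probability; d is an observation z or an action u.
    # sensor_model[(z, x)] = P(z | x); motion_model[(x, u, xp)] = P(x | u, x').
    if kind == "z":  # measurement update (lines 3-8)
        new_bel = {x: sensor_model[(d, x)] * p for x, p in bel.items()}
        eta = 1.0 / sum(new_bel.values())
        return {x: eta * p for x, p in new_bel.items()}
    else:            # prediction / action update (lines 9-11)
        return {x: sum(motion_model[(x, d, xp)] * bel[xp] for xp in bel)
                for x in bel}

# Door example: a noisy door sensor, then a "push" that opens a closed door w.p. 0.8.
sensor = {("z", "open"): 0.6, ("z", "closed"): 0.3}
motion = {("open", "push", "open"): 1.0,  ("open", "push", "closed"): 0.8,
          ("closed", "push", "open"): 0.0, ("closed", "push", "closed"): 0.2}
bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "z", "z", sensor_model=sensor)     # measurement: P(open) = 2/3
bel = bayes_filter(bel, "push", "u", motion_model=motion)  # prediction:  P(open) = 14/15
print(bel)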
Markov Assumption
p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)

p(x_t \mid x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)
Underlying Assumptions
• Static world
• Independent noise
• Perfect model, no approximation errors
Bayes Filters for Robot Localization
Bayes Filters are Familiar!
Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
Summary
• Bayes rule allows us to compute probabilities
that are hard to assess otherwise.
• Under the Markov assumption, recursive
Bayesian updating can be used to efficiently
combine evidence.
• Bayes filters are a probabilistic tool for
estimating the state of dynamic systems.