Lecture 2 - Recursive State Estimation-P1
MAPPING AND
LOCALIZATION
DR. SAFA A. ELASKARY
FALL 2024-2025
RECURSIVE STATE ESTIMATION
Lecture Outline
01 INTRODUCTION
02 BASIC PROBABILISTIC CONCEPTS
03 BAYES' FORMULA
04 BAYES' RULE WITH BACKGROUND KNOWLEDGE
05 SIMPLE EXAMPLE OF STATE ESTIMATION
01 - INTRODUCTION
At the heart of probabilistic robotics is the estimation of state from sensor data,
which involves inferring unobservable quantities.
Knowing certain quantities, such as the robot's exact location and nearby obstacles,
simplifies decision-making in robotic applications.
Robots rely on sensors to gather information, but these sensors provide only
partial data and are affected by noise, making direct measurement of state variables
impossible.
The goal is to recover state variables from the noisy sensor data.
Probabilistic state estimation algorithms compute belief distributions over possible
world states, allowing robots to make informed decisions despite uncertainty.
02. BASIC PROBABILISTIC CONCEPTS
PROBABILISTIC ROBOTICS
Key idea:
Explicit representation of uncertainty using the calculus of probability theory
02. BASIC PROBABILISTIC CONCEPTS
USING THE AXIOMS
02. BASIC PROBABILISTIC CONCEPTS
JOINT AND CONDITIONAL PROBABILITY
• P(X=x and Y=y) = P(x,y)
$$\sum_x P(x) = 1 \qquad \int p(x)\,dx = 1$$
$$P(x) = \sum_y P(x, y) \qquad p(x) = \int p(x, y)\,dy$$
$$P(x) = \sum_y P(x \mid y)\,P(y) \qquad p(x) = \int p(x \mid y)\,p(y)\,dy$$
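The marginalization identities above can be checked numerically. A minimal sketch, assuming a hypothetical joint distribution over a binary state X and a binary measurement Y (the names and numbers are illustrative, not from the slides):

```python
# Hypothetical joint distribution P(x, y) over two binary variables.
P_joint = {
    ("open", "sense"): 0.30, ("open", "no_sense"): 0.20,
    ("closed", "sense"): 0.15, ("closed", "no_sense"): 0.35,
}

def marginal_x(joint, x):
    # P(x) = sum_y P(x, y)  (discrete marginalization)
    return sum(p for (xi, _), p in joint.items() if xi == x)

total = sum(P_joint.values())          # a valid distribution sums to 1
P_open = marginal_x(P_joint, "open")   # 0.30 + 0.20 = 0.5
```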
03 - BAYES' FORMULA
$$P(x, y) = P(x \mid y)\,P(y) = P(y \mid x)\,P(x)$$
$$\Rightarrow\quad P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)}$$
NORMALIZATION
$$P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)} = \eta\, P(y \mid x)\,P(x)$$
$$\eta = P(y)^{-1} = \frac{1}{\sum_x P(y \mid x)\,P(x)}$$
Algorithm:
$$\forall x:\ \mathrm{aux}_{x|y} = P(y \mid x)\,P(x)$$
$$\eta = \frac{1}{\sum_x \mathrm{aux}_{x|y}}$$
$$\forall x:\ P(x \mid y) = \eta\,\mathrm{aux}_{x|y}$$
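The normalization algorithm translates directly into code. A minimal sketch over a discrete state space, represented as dictionaries (the state names and numbers below are illustrative):

```python
def normalized_bayes_update(prior, likelihood):
    """Normalization algorithm from the slide:
    aux_{x|y} = P(y|x) P(x);  eta = 1 / sum_x aux_{x|y};  P(x|y) = eta * aux_{x|y}.

    prior:      dict mapping state x -> P(x)
    likelihood: dict mapping state x -> P(y | x)
    """
    aux = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(aux.values())        # normalizer: result sums to 1
    return {x: eta * a for x, a in aux.items()}

# Illustrative numbers (not from the slides):
posterior = normalized_bayes_update({"a": 0.7, "b": 0.3},
                                    {"a": 0.2, "b": 0.8})
```

Note that the normalizer η never needs P(y) explicitly; it falls out of summing the unnormalized products.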
04 - BAYES' RULE WITH BACKGROUND KNOWLEDGE
$$P(x \mid y, z) = \frac{P(y \mid x, z)\,P(x \mid z)}{P(y \mid z)}$$
CONDITIONAL INDEPENDENCE
$$P(x, y \mid z) = P(x \mid z)\,P(y \mid z)$$
equivalent to
$$P(x \mid z) = P(x \mid z, y)$$
and
$$P(y \mid z) = P(y \mid z, x)$$
05 - SIMPLE EXAMPLE OF STATE ESTIMATION
CAUSAL VS. DIAGNOSTIC REASONING
• P(open|z) is diagnostic.
• P(z|open) is causal (it can be obtained by counting frequencies).
• Causal knowledge is often easier to obtain than diagnostic knowledge.
• Bayes' rule allows us to use causal knowledge:
$$P(\mathrm{open} \mid z) = \frac{P(z \mid \mathrm{open})\,P(\mathrm{open})}{P(z)}$$
EXAMPLE 1
• P(z|open) = 0.6, P(z|¬open) = 0.3
• P(open) = P(¬open) = 0.5
$$P(\mathrm{open} \mid z) = \frac{P(z \mid \mathrm{open})\,P(\mathrm{open})}{P(z \mid \mathrm{open})\,P(\mathrm{open}) + P(z \mid \neg\mathrm{open})\,P(\neg\mathrm{open})}$$
$$P(\mathrm{open} \mid z) = \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5} = \frac{2}{3} \approx 0.67$$
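Example 1 can be verified in a few lines. A sketch using the slide's values:

```python
# Numeric check of Example 1: P(open | z) with the slide's values.
p_z_open, p_z_not_open = 0.6, 0.3   # P(z | open), P(z | not open)
p_open = p_not_open = 0.5           # uniform prior over the door state

posterior_open = (p_z_open * p_open
                  / (p_z_open * p_open + p_z_not_open * p_not_open))
# posterior_open comes out to 2/3, about 0.67
```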
COMBINING EVIDENCE
$$P(x \mid z_1, \ldots, z_n) = \frac{P(z_n \mid x, z_1, \ldots, z_{n-1})\,P(x \mid z_1, \ldots, z_{n-1})}{P(z_n \mid z_1, \ldots, z_{n-1})}$$
Markov assumption: $z_n$ is independent of $z_1, \ldots, z_{n-1}$ if we know $x$:
$$P(x \mid z_1, \ldots, z_n) = \frac{P(z_n \mid x)\,P(x \mid z_1, \ldots, z_{n-1})}{P(z_n \mid z_1, \ldots, z_{n-1})} = \eta\, P(z_n \mid x)\,P(x \mid z_1, \ldots, z_{n-1})$$
$$= \eta_{1 \ldots n} \prod_{i=1}^{n} P(z_i \mid x)\,P(x)$$
EXAMPLE 2: SECOND MEASUREMENT
• P(z2|open) = 0.5, P(z2|¬open) = 0.6
• P(open|z1) = 2/3
$$P(\mathrm{open} \mid z_2, z_1) = \frac{P(z_2 \mid \mathrm{open})\,P(\mathrm{open} \mid z_1)}{P(z_2 \mid \mathrm{open})\,P(\mathrm{open} \mid z_1) + P(z_2 \mid \neg\mathrm{open})\,P(\neg\mathrm{open} \mid z_1)}$$
$$= \frac{\tfrac{1}{2} \cdot \tfrac{2}{3}}{\tfrac{1}{2} \cdot \tfrac{2}{3} + \tfrac{3}{5} \cdot \tfrac{1}{3}} = \frac{5}{8} = 0.625$$
• z2 lowers the probability that the door is open.
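The recursive measurement update can be sketched as a single reusable function; here it reproduces Example 2, starting from the posterior after z1:

```python
def measurement_update(belief, likelihood):
    """Recursive update under the Markov assumption:
    P(x | z_1..z_n) = eta * P(z_n | x) * P(x | z_1..z_{n-1}).

    belief:     dict state -> P(x | z_1..z_{n-1})
    likelihood: dict state -> P(z_n | x)
    """
    unnorm = {x: likelihood[x] * belief[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

belief = {"open": 2 / 3, "closed": 1 / 3}      # posterior after z1 (Example 1)
z2_likelihood = {"open": 0.5, "closed": 0.6}   # P(z2 | x) from the slide
belief = measurement_update(belief, z2_likelihood)
# belief["open"] comes out to 0.625, matching the slide
```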
• To be continued
A TYPICAL PITFALL
[Figure: p(x | d) plotted against the number of integrations.]
ACTIONS
• Transition model P(x|u,x') for the action u = "close door":
P(closed | u, open) = 0.9, P(open | u, open) = 0.1,
P(closed | u, closed) = 1, P(open | u, closed) = 0
• If the door is open, the action "close door" succeeds in 90% of all cases.
INTEGRATING THE OUTCOME OF ACTIONS
Continuous case:
$$P(x \mid u) = \int P(x \mid u, x')\,P(x')\,dx'$$
Discrete case:
$$P(x \mid u) = \sum_{x'} P(x \mid u, x')\,P(x')$$
EXAMPLE: THE RESULTING BELIEF
$$P(\mathrm{closed} \mid u) = \sum_{x'} P(\mathrm{closed} \mid u, x')\,P(x')$$
$$= P(\mathrm{closed} \mid u, \mathrm{open})\,P(\mathrm{open}) + P(\mathrm{closed} \mid u, \mathrm{closed})\,P(\mathrm{closed})$$
$$= \frac{9}{10} \cdot \frac{5}{8} + 1 \cdot \frac{3}{8} = \frac{15}{16}$$
$$P(\mathrm{open} \mid u) = \sum_{x'} P(\mathrm{open} \mid u, x')\,P(x')$$
$$= P(\mathrm{open} \mid u, \mathrm{open})\,P(\mathrm{open}) + P(\mathrm{open} \mid u, \mathrm{closed})\,P(\mathrm{closed})$$
$$= \frac{1}{10} \cdot \frac{5}{8} + 0 \cdot \frac{3}{8} = \frac{1}{16} = 1 - P(\mathrm{closed} \mid u)$$
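The prediction step above is a discrete total-probability sum; a minimal sketch reproducing the resulting belief, with the transition table encoded as a dict keyed by (next state, previous state):

```python
def prediction_update(belief, transition):
    """Action/prediction step: P(x | u) = sum_{x'} P(x | u, x') P(x').

    belief:     dict state -> P(x')
    transition: dict (x, x_prev) -> P(x | u, x_prev) for a fixed action u
    """
    return {x: sum(transition[(x, xp)] * belief[xp] for xp in belief)
            for x in belief}

# P(x | u, x') for u = "close door" (values from the slide).
T = {("closed", "open"): 0.9, ("open", "open"): 0.1,
     ("closed", "closed"): 1.0, ("open", "closed"): 0.0}

belief = {"open": 5 / 8, "closed": 3 / 8}   # belief after z1, z2
belief = prediction_update(belief, T)
# gives open = 1/16, closed = 15/16, matching the slide
```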
BAYES FILTERS: FRAMEWORK
• Given: stream of observations z and action data u:
$$d_t = \{u_1, z_1, \ldots, u_t, z_t\}$$
$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)$$
$$\overset{\text{Bayes}}{=} \eta\, P(z_t \mid x_t, u_1, z_1, \ldots, u_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)$$
$$\overset{\text{Markov}}{=} \eta\, P(z_t \mid x_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)$$
$$\overset{\text{Total prob.}}{=} \eta\, P(z_t \mid x_t) \int P(x_t \mid u_1, z_1, \ldots, u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}$$
$$\overset{\text{Markov}}{=} \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}$$
$$\overset{\text{Markov}}{=} \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, z_{t-1})\, dx_{t-1}$$
$$= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$
BAYES FILTER ALGORITHM
$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$
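For a discrete state space, one full Bayes filter step is a prediction followed by a normalized correction. A minimal sketch; the function is generic, while the transition and likelihood tables below reuse the door example's numbers:

```python
def bayes_filter_step(belief, u, z, transition, likelihood):
    """One discrete Bayes filter step:
    Bel(x_t) = eta * P(z_t | x_t) * sum_{x'} P(x_t | u_t, x') Bel(x').

    transition[u][(x, x_prev)] = P(x | u, x_prev)
    likelihood[z][x]           = P(z | x)
    """
    # Prediction: integrate the action u via total probability.
    predicted = {x: sum(transition[u][(x, xp)] * belief[xp] for xp in belief)
                 for x in belief}
    # Correction: weight by the measurement likelihood, then normalize.
    unnorm = {x: likelihood[z][x] * predicted[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

# Door-example models: transition from the "close door" slide,
# likelihood from Example 1.
T = {"close": {("closed", "open"): 0.9, ("open", "open"): 0.1,
               ("closed", "closed"): 1.0, ("open", "closed"): 0.0}}
L = {"z": {"open": 0.6, "closed": 0.3}}

bel = bayes_filter_step({"open": 0.5, "closed": 0.5}, "close", "z", T, L)
```

The filters listed below (Kalman filters, particle filters, and so on) are all instances of this single update equation with different belief representations.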
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes
(POMDPs)
SUMMARY
• Thank you