Lecture 2 - Recursive State Estimation-P1

The document outlines the concepts of recursive state estimation in probabilistic robotics, focusing on the estimation of a robot's state from noisy sensor data. It covers key probabilistic concepts, Bayes' formula, and the integration of actions and observations to update beliefs about the robot's environment. The framework of Bayes filters is introduced for estimating the state of a dynamic system based on a stream of observations and actions.


AIE455 ROBOT MAPPING AND LOCALIZATION
DR. SAFA A. ELASKARY

FALL 2024-2025
RECURSIVE STATE ESTIMATION
Lecture Outline
01 INTRODUCTION
02 BASIC PROBABILISTIC CONCEPTS AND NOTATIONS
03 BAYES' FORMULA
04 BAYES' RULE WITH BACKGROUND KNOWLEDGE
05 SIMPLE EXAMPLE OF STATE ESTIMATION

01 - INTRODUCTION

 At the heart of probabilistic robotics is the estimation of state from sensor data,
which involves inferring unobservable quantities.
 Knowing certain quantities, such as the robot's exact location and nearby obstacles,
simplifies decision-making in robotic applications.
 Robots rely on sensors to gather information, but these sensors provide only
partial data and are affected by noise, making direct measurement of state variables
impossible.
 The goal is to recover state variables from the noisy sensor data.
 Probabilistic state estimation algorithms compute belief distributions over possible
world states, allowing robots to make informed decisions despite uncertainty.
02. BASIC PROBABILISTIC CONCEPTS
PROBABILISTIC ROBOTICS
Key idea:
Explicit representation of uncertainty using the calculus of probability theory

• Perception = state estimation
• Action = utility optimization
02. BASIC PROBABILISTIC CONCEPTS

• A common density function is that of the one-dimensional normal distribution with mean µ and variance σ².
• We will frequently abbreviate it as N(x; µ, σ²), which specifies the random variable, its mean, and its variance.
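For reference, the one-dimensional normal density abbreviated as N(x; µ, σ²) is:

```latex
p(x) \;=\; \left(2\pi\sigma^{2}\right)^{-\frac{1}{2}} \exp\!\left\{-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right\}
```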

02. BASIC PROBABILISTIC CONCEPTS


AXIOMS OF PROBABILITY THEORY
Pr(A) denotes the probability that proposition A is true.

• 0 ≤ Pr(A) ≤ 1
• Pr(True) = 1,  Pr(False) = 0
• Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)


02. BASIC PROBABILISTIC CONCEPTS
A CLOSER LOOK AT AXIOM 3

Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)

[Venn diagram: within the universe "True", regions A and B overlap in A ∧ B, which would otherwise be counted twice.]
02. BASIC PROBABILISTIC CONCEPTS
USING THE AXIOMS

Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) − Pr(A ∧ ¬A)
Pr(True) = Pr(A) + Pr(¬A) − Pr(False)
1 = Pr(A) + Pr(¬A) − 0
Pr(¬A) = 1 − Pr(A)
02. BASIC PROBABILISTIC CONCEPTS
DISCRETE RANDOM VARIABLES
• X denotes a random variable.

• X can take on a countable number of values in {x1, x2, …, xn}.

• P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.

• P(·) is called the probability mass function.

• E.g. P(Room) = ⟨0.7, 0.2, 0.08, 0.02⟩


02. BASIC PROBABILISTIC CONCEPTS
CONTINUOUS RANDOM VARIABLES
• X takes on values in the continuum.

• p(X = x), or p(x), is a probability density function.

Pr(x ∈ (a, b)) = ∫_a^b p(x) dx

[Figure: an example density p(x) plotted over x.]
02. BASIC PROBABILISTIC CONCEPTS
JOINT AND CONDITIONAL PROBABILITY
• P(X=x and Y=y) = P(x,y)

• If X and Y are independent then


P(x,y) = P(x) P(y)
• P(x | y) is the probability of x given y
P(x | y) = P(x,y) / P(y)
P(x,y) = P(x | y) P(y)
• If X and Y are independent then
P(x | y) = P(x)
02. BASIC PROBABILISTIC CONCEPTS
LAW OF TOTAL PROBABILITY, MARGINALS

Discrete case:
Σ_x P(x) = 1
P(x) = Σ_y P(x, y)
P(x) = Σ_y P(x | y) P(y)

Continuous case:
∫ p(x) dx = 1
p(x) = ∫ p(x, y) dy
p(x) = ∫ p(x | y) p(y) dy
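The two cases above can be checked numerically. The sketch below uses a made-up joint distribution over two binary variables (all names and numbers are illustrative, not from the slides):

```python
# Hypothetical joint distribution P(x, y) over two binary variables.
P_xy = {
    ("x0", "y0"): 0.3, ("x0", "y1"): 0.2,
    ("x1", "y0"): 0.1, ("x1", "y1"): 0.4,
}

# Marginals: P(x) = sum_y P(x, y) and P(y) = sum_x P(x, y).
P_x, P_y = {}, {}
for (x, y), p in P_xy.items():
    P_x[x] = P_x.get(x, 0.0) + p
    P_y[y] = P_y.get(y, 0.0) + p

# Conditionals P(x | y) = P(x, y) / P(y) ...
P_x_given_y = {(x, y): p / P_y[y] for (x, y), p in P_xy.items()}

# ... recombined via the law of total probability: P(x) = sum_y P(x | y) P(y).
P_x_total = {x: sum(P_x_given_y[(x, y)] * P_y[y] for y in P_y) for x in P_x}

print(P_x)        # both routes give the same marginal distribution
print(P_x_total)
```

Summing the joint over y and applying the law of total probability produce identical marginals, as the formulas require.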
03 - BAYES' FORMULA

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y)
03 - BAYES' FORMULA
NORMALIZATION

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = P(y)⁻¹ = 1 / Σ_x P(y | x) P(x)

Algorithm:
∀x: aux_{x|y} = P(y | x) P(x)
η = 1 / Σ_x aux_{x|y}
∀x: P(x | y) = η aux_{x|y}
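A minimal sketch of this normalization routine, assuming the state space and both distributions are given as Python dictionaries (the function name and data layout are illustrative):

```python
def bayes_posterior(prior, likelihood):
    """Compute P(x | y) from P(x) and P(y | x) via normalization.

    prior:      dict mapping state x -> P(x)
    likelihood: dict mapping state x -> P(y | x) for the observed y
    """
    # aux_{x|y} = P(y | x) P(x)
    aux = {x: likelihood[x] * prior[x] for x in prior}
    # eta = 1 / sum_x aux_{x|y}
    eta = 1.0 / sum(aux.values())
    # P(x | y) = eta * aux_{x|y}
    return {x: eta * a for x, a in aux.items()}
```

With the door numbers used later in the lecture (prior 0.5/0.5, likelihoods 0.6/0.3), this returns a posterior of 2/3 for "open". Deferring the division until the end means P(y) never has to be modeled explicitly.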
04 - BAYES' RULE WITH BACKGROUND KNOWLEDGE

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
CONDITIONAL INDEPENDENCE

P(x, y | z) = P(x | z) P(y | z)

is equivalent to

P(x | z) = P(x | z, y)
and
P(y | z) = P(y | z, x)
05-SIMPLE EXAMPLE OF STATE ESTIMATION

• Suppose a robot obtains measurement z


• What is P(open|z)?

CAUSAL VS. DIAGNOSTIC REASONING

• P(open | z) is diagnostic.
• P(z | open) is causal (count frequencies!).
• Often causal knowledge is easier to obtain.
• Bayes rule allows us to use causal knowledge:

P(open | z) = P(z | open) P(open) / P(z)
EXAMPLE 1
• P(z | open) = 0.6,  P(z | ¬open) = 0.3
• P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]

P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67

• z raises the probability that the door is open.

COMBINING EVIDENCE

• Suppose our robot obtains another observation z2.
• How can we integrate this new information?
• More generally, how can we estimate P(x | z1, …, zn)?
RECURSIVE BAYESIAN UPDATING

P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)

Markov assumption: zn is independent of z1, …, zn−1 if we know x.

P(x | z1, …, zn) = P(zn | x) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
                 = η P(zn | x) P(x | z1, …, zn−1)
                 = η_{1…n} [∏_{i=1…n} P(zi | x)] P(x)
EXAMPLE 2: SECOND MEASUREMENT
• P(z2 | open) = 0.5,  P(z2 | ¬open) = 0.6
• P(open | z1) = 2/3

P(open | z2, z1) = P(z2 | open) P(open | z1) / [P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1)]

= (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = (1/3) / (8/15) = 5/8 = 0.625

• z2 lowers the probability that the door is open.
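A small sketch of the recursive update, reusing the door numbers from Examples 1 and 2 (the function name and dictionary layout are illustrative):

```python
def measurement_update(belief, likelihood):
    """One recursive Bayes update: Bel'(x) = eta * P(z | x) * Bel(x)."""
    unnorm = {x: likelihood[x] * belief[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

belief = {"open": 0.5, "closed": 0.5}                               # prior
belief = measurement_update(belief, {"open": 0.6, "closed": 0.3})   # z1
print(belief["open"])   # 2/3 after z1
belief = measurement_update(belief, {"open": 0.5, "closed": 0.6})   # z2
print(belief["open"])   # 0.625 after z2
```

Each observation only requires the previous posterior, never the full measurement history, which is the point of the recursive form.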
A TYPICAL PITFALL

• Two possible locations x1 and x2
• P(x1) = 0.99
• P(z | x2) = 0.09,  P(z | x1) = 0.07

[Figure: p(x1 | d) and p(x2 | d) versus the number of integrations of the same measurement z; over roughly 50 integrations p(x2 | d) rises toward 1 while p(x1 | d) falls, despite the strong prior on x1.]
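The pitfall can be reproduced in a few lines, assuming the same measurement z is (incorrectly) integrated over and over as if each integration were independent evidence:

```python
# Prior strongly favors x1, but the likelihood slightly favors x2.
p_x1, p_x2 = 0.99, 0.01
lik_x1, lik_x2 = 0.07, 0.09

history = []
for n in range(50):
    # Integrate the SAME measurement z again, as if it were new evidence.
    u1, u2 = lik_x1 * p_x1, lik_x2 * p_x2
    p_x1, p_x2 = u1 / (u1 + u2), u2 / (u1 + u2)
    history.append(p_x2)

# The small likelihood ratio 0.09/0.07 compounds until x2 dominates.
print(history[9], history[-1])
```

After about 19 repetitions the belief in x2 crosses 0.5, and by 50 it is essentially 1: a tiny likelihood ratio, applied to non-independent data, overwhelms even a 99% prior.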
ACTIONS

• Often the world is dynamic since


• actions carried out by the robot,
• actions carried out by other agents,
• or just the time passing by

change the world.

• How can we incorporate such actions?


TYPICAL ACTIONS

• The robot turns its wheels to move


• The robot uses its manipulator to grasp an object
• Plants grow over time…

• Actions are never carried out with absolute


certainty.
• In contrast to measurements, actions generally
increase the uncertainty.
MODELING ACTIONS

• To incorporate the outcome of an action u into the


current “belief”, we use the conditional pdf

P(x|u,x’)

• This term specifies the probability that executing u changes the state from x' to x.
EXAMPLE: CLOSING THE DOOR
STATE TRANSITIONS

P(x | u, x') for u = "close door":

P(closed | u, open) = 0.9    P(open | u, open) = 0.1
P(closed | u, closed) = 1    P(open | u, closed) = 0

If the door is open, the action "close door" succeeds in 90% of all cases.
INTEGRATING THE OUTCOME OF ACTIONS

Continuous case:
P(x | u) = ∫ P(x | u, x') P(x') dx'

Discrete case:
P(x | u) = Σ_{x'} P(x | u, x') P(x')
EXAMPLE: THE RESULTING BELIEF

P(closed | u) = Σ_{x'} P(closed | u, x') P(x')
             = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
             = 9/10 · 5/8 + 1 · 3/8 = 15/16

P(open | u) = Σ_{x'} P(open | u, x') P(x')
            = P(open | u, open) P(open) + P(open | u, closed) P(closed)
            = 1/10 · 5/8 + 0 · 3/8 = 1/16
            = 1 − P(closed | u)
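A sketch of this discrete prediction step, with the door-example numbers hard-coded for illustration (the function name and the (new_state, old_state) key convention are assumptions of this sketch):

```python
def prediction_update(belief, transition):
    """Action update: P(x | u) = sum_{x'} P(x | u, x') P(x')."""
    return {
        x: sum(transition[(x, x_prev)] * p for x_prev, p in belief.items())
        for x in belief
    }

# P(x | u, x') for u = "close door": (new_state, old_state) -> probability
close_door = {
    ("closed", "open"): 0.9, ("closed", "closed"): 1.0,
    ("open", "open"): 0.1,   ("open", "closed"): 0.0,
}

belief = {"open": 5 / 8, "closed": 3 / 8}
belief = prediction_update(belief, close_door)
print(belief)  # open = 1/16, closed = 15/16
```

The result matches the slide's arithmetic: 15/16 closed, 1/16 open.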
BAYES FILTERS: FRAMEWORK

• Given:
  • Stream of observations z and action data u: d_t = {u1, z1, …, ut, zt}
  • Sensor model P(z | x).
  • Action model P(x | u, x').
  • Prior probability of the system state P(x).
• Wanted:
  • Estimate of the state X of a dynamical system.
  • The posterior of the state is also called the belief:
    Bel(xt) = P(xt | u1, z1, …, ut, zt)
MARKOV ASSUMPTION

p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)
p(xt | x1:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)

Underlying assumptions:
• Static world
• Independent noise
• Perfect model, no approximation errors
BAYES FILTERS   (z = observation, u = action, x = state)

Bel(xt) = P(xt | u1, z1, …, ut, zt)

(Bayes)       = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)
(Markov)      = η P(zt | xt) P(xt | u1, z1, …, ut)
(Total prob.) = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
(Markov)      = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
(Markov)      = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, zt−1) dxt−1
              = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
BAYES FILTER ALGORITHM

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

1. Algorithm Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12.  Return Bel'(x)
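The algorithm above, specialized to a finite state space, might look like the following sketch. The door-example models are reused as data; the function signature and the idea of passing in models already conditioned on the current data item are assumptions of this sketch, not the slides' notation:

```python
def bayes_filter(belief, d_kind, sensor=None, transition=None):
    """One step of the discrete Bayes filter.

    belief:     dict state -> Bel(x)
    d_kind:     "z" for a perceptual data item, "u" for an action data item
    sensor:     dict state -> P(z | x), used when d_kind == "z"
    transition: dict (x, x_prev) -> P(x | u, x_prev), used when d_kind == "u"
    """
    if d_kind == "z":            # measurement update: Bel'(x) = eta P(z|x) Bel(x)
        new = {x: sensor[x] * belief[x] for x in belief}
        eta = sum(new.values())
        return {x: p / eta for x, p in new.items()}
    else:                        # prediction: Bel'(x) = sum_x' P(x|u,x') Bel(x')
        return {x: sum(transition[(x, xp)] * belief[xp] for xp in belief)
                for x in belief}

# Door example: measure z1, measure z2, then act with u = "close door".
bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "z", sensor={"open": 0.6, "closed": 0.3})   # open -> 2/3
bel = bayes_filter(bel, "z", sensor={"open": 0.5, "closed": 0.6})   # open -> 5/8
bel = bayes_filter(bel, "u", transition={
    ("closed", "open"): 0.9, ("closed", "closed"): 1.0,
    ("open", "open"): 0.1,   ("open", "closed"): 0.0,
})
print(bel)  # open = 1/16, closed = 15/16
```

Chaining the two branches reproduces every intermediate result from the lecture's door example, ending with the 15/16 belief that the door is closed.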
BAYES FILTERS ARE FAMILIAR!

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes
(POMDPs)
SUMMARY

• Bayes rule allows us to compute probabilities that are hard to assess


otherwise.
• Under the Markov assumption, recursive Bayesian updating can be used to
efficiently combine evidence.
• Bayes filters are a probabilistic tool for estimating the state of dynamic
systems.

• Thank you
