Lecture1 2023

The document discusses robot localization, which is the process of estimating a robot's pose (position and orientation) relative to a global coordinate frame using sensor measurements and a motion model. The basic process involves using a motion model to predict the robot's new pose based on its previous pose and control input (prediction step), and then updating this predicted pose using a new sensor measurement based on the measurement model (update step). This process deals with uncertainty as there is noise in sensors, motion, and the environment. It uses probability and Bayes' rule to estimate the robot's pose given all past sensor measurements and control inputs.
© All Rights Reserved

Autonomy for Mobile Robots

The fundamental autonomy problem we will be working on in this course:

A robot determining where it is (i.e. robot localization)

1
Robot localization
Estimating a robot’s pose, i.e. its
location (or position) and
orientation relative to a global
coordinate frame.

2
Robot localization
Robot localization is a form of state
estimation, and initially we take the
state to mean the pose.
If the robot is confined to a surface, the
state vector is denoted

  xt = [ x  y  θ ]ᵀ

where θ is the robot's orientation (heading).
3
Robot localization
We use the symbol xt to denote the
robot’s pose at time t relative to a
global coordinate frame, more
specifically a map.

• For the vast majority of work in this course, we will assume the map is already known (given).
• We will deal with time in discrete steps.
4
5
6
Basic robot localization process
[Diagram: localization loop in which the motion model (prediction step) and the measurement (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

7
Robot localization example 1

8
Basic robot localization process
[Diagram: localization loop in which the motion model (prediction step) and the measurement (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

9
Uncertainty in Localization
Motion model: robot may slip; error in heading can affect new position

Measurement model: range and resolution limitations, noise, faults

Environment/map: people can get in the way, a new door can be installed, etc.
10
Basic robot localization process
[Diagram: localization loop in which the motion model (prediction step) and the measurement (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

Every aspect involves uncertainty… we need to use our knowledge of probabilities!
11
Axioms of Probability Theory
Pr(A) denotes probability that proposition A is true.

• 0  Pr( A)  1

• Pr(True) = 1 Pr(False) = 0

• Pr( A  B) = Pr( A) + Pr(B) − Pr( A  B)

12
A Closer Look at Axiom 3

Pr( A  B) = Pr( A) + Pr(B) − Pr( A  B)

True
A A B B

13
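As a quick sanity check, the inclusion-exclusion identity can be verified numerically on a toy sample space. The events A and B below are made-up examples, not from the lecture:

```python
# Numeric check of Axiom 3 (inclusion-exclusion) on a toy 10-outcome sample space.
outcomes = set(range(10))                  # uniform sample space {0, ..., 9}
A = {o for o in outcomes if o < 6}         # Pr(A) = 0.6
B = {o for o in outcomes if o % 2 == 0}    # Pr(B) = 0.5

def pr(event):
    """Probability of an event under the uniform distribution on outcomes."""
    return len(event) / len(outcomes)

lhs = pr(A | B)                            # Pr(A or B)
rhs = pr(A) + pr(B) - pr(A & B)            # Pr(A) + Pr(B) - Pr(A and B)
print(lhs, rhs)                            # both approximately 0.8
```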
Discrete Random Variables

• X denotes a random variable.


• X is discrete if it can take on a countable
number of values, for example
X = {heads, tails}.
• P(X=x), or P(x), is the probability that the
random variable X takes on the particular
value x, and is called a probability mass
function.
• Total probability:  P( x) = 1
x
14
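A probability mass function can be represented directly as a table of values; this toy sketch (a biased coin, with made-up probabilities) checks the total-probability property:

```python
# A discrete random variable as a probability mass function
# (toy example: a biased coin, values made up for illustration).
P = {"heads": 0.75, "tails": 0.25}

# P(X = x) for a particular value x:
print(P["heads"])        # 0.75

# Total probability: the PMF sums to 1 over all values of X.
print(sum(P.values()))   # 1.0
```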
Continuous Random Variables

• X takes on values in the continuum.

• p(X=x), or p(x), is a probability density function. E.g.:

  Pr(x ∈ (a, b)) = ∫_a^b p(x) dx

  [Plot: a density curve p(x) over x, with the area under the curve between a and b shaded]

• Total probability: ∫_{−∞}^{∞} p(x) dx = 1
15
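Both integrals above can be checked numerically. This sketch uses the standard normal density as an example (the density choice and the Riemann-sum approximation are mine, not the lecture's):

```python
# Riemann-sum check of the density properties, using a standard normal pdf.
import math

def p(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

dx = 0.001
xs = [-8 + i * dx for i in range(int(16 / dx))]

# Pr(x in (a, b)) = integral of p(x) from a to b; here (a, b) = (0, 1).
pr_0_1 = sum(p(x) * dx for x in xs if 0 < x < 1)
print(round(pr_0_1, 3))    # ~0.341

# Total probability: integral over (-inf, inf), approximated on [-8, 8].
total = sum(p(x) * dx for x in xs)
print(round(total, 3))     # ~1.0
```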
Random Variables for this course
Random variable      Denoted   Particular value, indexed at time t
Robot state            X         xt
Sensor measurement     Z         zt
Control input          U         ut

[Diagram: localization loop in which the motion model (prediction step) and the measurement (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

16
Joint and Conditional Probability
• P(X=x and Y=y) = P(x,y)

• If X and Y are independent then


P(x,y) = P(x) P(y)
• P(x | y) is the probability of x given y
P(x | y) = P(x,y) / P(y)
P(x,y) = P(x | y) P(y)
• If X and Y are independent then
P(x | y) = P(x)
17
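These definitions can be exercised on a small joint table. The variables and numbers below are made up for illustration:

```python
# Joint and conditional probability on a small made-up joint table P(x, y).
P_joint = {
    ("sun", "warm"): 0.4, ("sun", "cold"): 0.1,
    ("rain", "warm"): 0.1, ("rain", "cold"): 0.4,
}

# P(y) by summing the joint over x.
P_warm = sum(p for (x, y), p in P_joint.items() if y == "warm")   # 0.5

# P(x | y) = P(x, y) / P(y)
P_sun_given_warm = P_joint[("sun", "warm")] / P_warm
print(P_sun_given_warm)    # 0.8

# Here X and Y are NOT independent: P(x, y) != P(x) P(y).
P_sun = sum(p for (x, y), p in P_joint.items() if x == "sun")     # 0.5
print(P_joint[("sun", "warm")] == P_sun * P_warm)                 # False
```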
Total probabilities for conditionals

Discrete case:

  Σx P(x | y) = 1

  Σy P(y | x) = 1

Continuous case:

  ∫ p(x | y) dx = 1

  ∫ p(y | x) dy = 1

Note: Σx Σy P(x | y) ≠ 1 in general

18
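The same point in code: each conditional distribution P(x | y) normalizes over x for a fixed y, but summing over both variables does not give 1. The numbers are toy values:

```python
# P(x | y) sums to 1 over x for each fixed y, but summing over both x and y
# does not give 1 in general (toy numbers).
P_cond = {                       # P(x | y), keyed by y
    "warm": {"sun": 0.8, "rain": 0.2},
    "cold": {"sun": 0.2, "rain": 0.8},
}

for y, dist in P_cond.items():
    print(y, sum(dist.values()))     # each fixed y sums to 1.0

total_over_xy = sum(p for dist in P_cond.values() for p in dist.values())
print(total_over_xy)                 # 2.0, not 1
```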
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)

[Diagram: localization loop in which the motion model (prediction step) and the measurement (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

19
Robot localization example 2

20
Robot localization example 2

21
22
Robot localization example 2

23
Robot localization example 2

24
More Probability review: Bayes Formula
P(x, y) = P(x | y) P(y) = P(y | x) P(x)

  P(x | y) = P(y | x) P(x) / P(y)   (likelihood × prior / evidence)
• Bayes Formula allows us to “invert”
probability problems, to find something
hard to estimate, p(x | z), from something
easier to directly estimate, p(z | x)
25
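A minimal sketch of that inversion, with a sensor reporting whether a door is open. The likelihoods and prior are illustrative numbers, not from the lecture:

```python
# Bayes inversion sketch: estimate p(open | z) from the easier-to-model
# sensor likelihood p(z | open). All numbers are illustrative.
p_z_given_open = 0.6       # p(z | door open): sensor reports "open"
p_z_given_closed = 0.3     # p(z | door closed): false-positive rate
p_open = 0.5               # prior p(open)

# Evidence p(z) via total probability.
p_z = p_z_given_open * p_open + p_z_given_closed * (1 - p_open)

# Posterior = likelihood * prior / evidence.
p_open_given_z = p_z_given_open * p_open / p_z
print(round(p_open_given_z, 3))    # 0.667
```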
Bayes Formula
P(x, y) = P(x | y) P(y) = P(y | x) P(x)

  P(x | y) = P(y | x) P(x) / P(y)   (likelihood × prior / evidence)

• Conditional on additional variables:

  P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
26
Marginal Probabilities

Discrete case

P ( x ) =  P ( x, y )
y

P( x) =  P( x | y ) P( y )
y

Continuous case

p ( x) =  p ( x, y ) dy

p( x) =  p( x | y ) p( y ) dy Image credit: Bscan

27
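The second discrete identity (the law of total probability) in code, with toy numbers:

```python
# Law of total probability: recover the marginal P(x) from the conditional
# P(x | y) and the marginal P(y) (toy numbers).
P_y = {"warm": 0.5, "cold": 0.5}
P_x_given_y = {
    "warm": {"sun": 0.8, "rain": 0.2},
    "cold": {"sun": 0.2, "rain": 0.8},
}

# P(x) = sum over y of P(x | y) P(y)
P_sun = sum(P_x_given_y[y]["sun"] * P_y[y] for y in P_y)
print(P_sun)    # 0.5
```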
Conditioning on additional variables

Marginal probabilities, conditioned on an additional variable (in this case, y):

  P(x) = ∫ P(x, z) dz

  P(x) = ∫ P(x | z) P(z) dz

  P(x | y) = ∫ P(x | y, z) P(z | y) dz

28
Normalization
  P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

  η = P(y)⁻¹ = 1 / Σx P(y | x) P(x)

(note we use the expression for P(y) from marginal probabilities)

The denominator in the Bayes Formula can be thought of as just a normalization constant to ensure

  ∫ p(x | y) dx = 1
29
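In practice one computes the unnormalized posterior and then rescales by η, which is often simpler than evaluating P(y) directly. A sketch with made-up states and numbers:

```python
# Normalization sketch: form the unnormalized posterior P(y | x) P(x), then
# rescale by eta = 1 / P(y). States and numbers are made up.
prior = {"s1": 0.25, "s2": 0.25, "s3": 0.5}       # P(x)
likelihood = {"s1": 0.9, "s2": 0.3, "s3": 0.1}    # P(y | x)

unnorm = {x: likelihood[x] * prior[x] for x in prior}
eta = 1 / sum(unnorm.values())                    # eta = P(y)^-1
posterior = {x: eta * p for x, p in unnorm.items()}

print(round(sum(posterior.values()), 10))         # 1.0 by construction
```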
Back to robot localization
An important assumption:
• Current measurement depends only on current state
• Current state can be fully known from prior state and
current control input
• Note: Our estimate of state depends on the measurement,
but not the state itself
Arrows show conditional dependencies; variables assumed
conditionally independent of all others not linked by arrow,
when already conditioned on the linked variable(s):

30
Markov Assumption

p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)

p(xt | x0:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)
Underlying Assumptions
• Static world
• Independent noise
• Perfect software approximation of the
probabilistic model
31
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)

[Diagram: localization loop in which the motion model (prediction step) and the measurement (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

32
Bayes Formula
P(x, y) = P(x | y) P(y) = P(y | x) P(x)

  P(x | y) = P(y | x) P(x) / P(y)   (likelihood × prior / evidence)

• Conditional on additional variables:

  P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
33
Normalization
  P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

  η = P(y)⁻¹ = 1 / Σx P(y | x) P(x)

(note we use the expression for P(y) from marginal probabilities)

The denominator in the Bayes Formula can be thought of as just a normalization constant to ensure

  ∫ p(x | y) dx = 1
34
Markov Assumption

p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)

p(xt | x0:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)

35
36
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)

[Diagram: the belief Bel(xt) is maintained by the measurement model p(zt | xt) (update step) and the motion model (prediction step)]

37
Conditioning on additional variables

• Marginal probabilities with conditioning:

  P(x) = ∫ P(x, z) dz

  P(x) = ∫ P(x | z) P(z) dz

  P(x | y) = ∫ P(x | y, z) P(z | y) dz

38
Markov Assumption

p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)

p(xt | x0:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)

39
40
Conditional probabilities for this course
Measurement model: p(zt | xt)
Motion model: p(xt | xt-1 , ut )
Belief: Bel(xt) = p(xt | z1:t , u1:t)

[Diagram: one update timestep of the belief Bel(xt), combining the measurement model p(zt | xt) and the motion model p(xt | xt−1, ut)]

41
Bayes Filters
Bel ( xt ) =  P( zt | xt )  P( xt | ut , xt −1 ) Bel ( xt −1 ) dxt −1

[Diagram: localization loop in which the motion/action model (prediction step) and the measurement/perception (sensor) model (update step) each refine the answer to "Where (on the map) am I?"]

42
Bayes Filters
Bel ( xt ) =  P( zt | xt )  P( xt | ut , xt −1 ) Bel ( xt −1 ) dxt −1

 Prediction Update 
Bel ( xt −1 ) step Bel ( xt ) step Bel ( xt )

Note: We will also deal with sequences


where we do multiple prediction steps
or update steps in a row

43
Discrete Bayes Filter Algorithm
(sequence of perception and action doesn’t matter)

44
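On a discrete state space, the Bayes filter's integral becomes a sum and the whole algorithm fits in a few lines. A minimal sketch on a 1-D grid of poses; the corridor, door location, and model numbers below are hypothetical, not from the lecture:

```python
# A minimal discrete Bayes filter sketch on a 1-D grid of poses
# (hypothetical motion and sensor models).

def predict(bel, u, p_motion):
    """Prediction step: Bel_bar(xt) = sum over x_{t-1} of P(xt | ut, x_{t-1}) Bel(x_{t-1})."""
    n = len(bel)
    return [sum(p_motion(xt, u, xp) * bel[xp] for xp in range(n))
            for xt in range(n)]

def update(bel_bar, z, p_sensor):
    """Update step: Bel(xt) = eta * P(zt | xt) * Bel_bar(xt)."""
    unnorm = [p_sensor(z, xt) * b for xt, b in enumerate(bel_bar)]
    eta = 1.0 / sum(unnorm)              # normalizer, eta = P(zt)^-1
    return [eta * b for b in unnorm]

# Hypothetical models: a 5-cell circular corridor with a door at cell 2.
def p_motion(xt, u, xp):
    """Deterministic motion: move right by u cells (no slip, for simplicity)."""
    return 1.0 if xt == (xp + u) % 5 else 0.0

def p_sensor(z, xt):
    """Door sensor that is right 80% of the time."""
    at_door = (xt == 2)
    return 0.8 if (z == "door") == at_door else 0.2

bel = [0.2] * 5                          # uniform prior over the 5 cells
bel = update(predict(bel, 1, p_motion), "door", p_sensor)
print([round(b, 3) for b in bel])        # [0.125, 0.125, 0.5, 0.125, 0.125]
```

One prediction step followed by one update step concentrates probability mass on the door cell, exactly the prediction/update loop from the slides.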
