Slides 5

Chapter 5 of 'Autonomous Mobile Robots' focuses on localization and map building for autonomous robots, discussing various techniques such as odometric position estimation, belief representation, and probabilistic map-based localization. It highlights challenges like sensor noise and aliasing, and the importance of accurate environment modeling for effective navigation. The chapter also outlines the processes involved in updating position estimates using observations from the environment.


Autonomous Mobile Robots, Chapter 5

Localization and Map Building


• Noise and aliasing; odometric position estimation
• To localize or not to localize
• Belief representation
• Map representation
• Probabilistic map-based localization
• Other examples of localization systems
• Autonomous map building

[Figure: general control scheme — Perception of the Real World builds a Local Map; together with the Global Map / Environment Model, Localization estimates the "Position"; Cognition plans a Path; Motion Control acts on the Environment]

© R. Siegwart, I. Nourbakhsh
5.1 Localization: Where Am I?

[Figure: position-update loop — encoder data drives a position prediction (e.g. odometry); predicted observations are matched against the map database; matched observations feed the position update (estimation)]

• Odometry, dead reckoning
• Localization based on external sensors, beacons or landmarks
• Probabilistic map-based localization

5.2 Challenges of Localization
• Knowing the absolute position (e.g. from GPS) is not sufficient

• Localization at human scale must relate the robot to its environment

• Planning in the cognition step requires more than only the position as input

• Perception and motion play an important role


Ø Sensor noise
Ø Sensor aliasing
Ø Effector noise
Ø Odometric position estimation

5.2.1 Sensor Noise

• Sensor noise is mainly influenced by the environment
e.g. surface, illumination …

• or by the measurement principle itself
e.g. interference between ultrasonic sensors

• Sensor noise drastically reduces the useful information of sensor
readings. The solution is:
Ø to take multiple readings into account
Ø to employ temporal and/or multi-sensor fusion

5.2.2 Sensor Aliasing
• In robots, non-uniqueness of sensor readings is the norm

• Even with multiple sensors, there is a many-to-one mapping from
environmental states to the robot's perceptual inputs

• Therefore the amount of information perceived by the sensors is
generally insufficient to identify the robot's position from a single
reading
Ø The robot's localization is usually based on a series of readings
Ø Sufficient information is recovered by the robot over time

5.2.3 Effector Noise: Odometry, Dead Reckoning

• Odometry and dead reckoning:
Position update is based on proprioceptive sensors
Ø Odometry: wheel sensors only
Ø Dead reckoning: also heading sensors
• The movement of the robot, sensed with wheel encoders and/or
heading sensors, is integrated to the position.
Ø Pros: straightforward, easy
Ø Cons: errors are integrated -> unbounded
• Using additional heading sensors (e.g. a gyroscope) might help to reduce
the accumulated errors, but the main problems remain the same.

5.2.3 Odometry: Error Sources
deterministic (systematic) vs. non-deterministic (non-systematic)

Ø Deterministic errors can be eliminated by proper calibration of the system.
Ø Non-deterministic errors have to be described by error models and will
always lead to an uncertain position estimate.
• Major error sources:
Ø Limited resolution during integration (time increments, measurement resolution …)
Ø Misalignment of the wheels (deterministic)
Ø Unequal wheel diameter (deterministic)
Ø Variation in the contact point of the wheel
Ø Unequal floor contact (slipping, not planar …)
Ø …

5.2.3 Odometry: Classification of Integration Errors

• Range error: integrated path length (distance) of the robot's movement
Ø sum of the wheel movements
• Turn error: similar to range error, but for turns
Ø difference of the wheel motions
• Drift error: difference in the error of the wheels leads to an error in the
robot's angular orientation
Over long periods of time, turn and drift errors
far outweigh range errors!
Ø Consider moving forward on a straight line along the x axis. The error
in the y-position introduced by a move of d meters will have a component
of d·sin(∆θ), which can be quite large as the angular error ∆θ grows.
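A quick numeric sketch of the point above (the numbers are illustrative, not from the slides): even a modest heading error produces a large lateral error over a straight drive.

```python
import math

# Lateral (y) error caused by a heading error dtheta over a straight
# drive of d metres along x: y_err = d * sin(dtheta).
d = 10.0  # metres driven (illustrative)
for dtheta_deg in (1, 5, 10):
    y_err = d * math.sin(math.radians(dtheta_deg))
    print(f"heading error {dtheta_deg:2d} deg -> lateral error {y_err:.2f} m")
```

For a 10 m drive, a 10° heading error already displaces the robot by about 1.7 m sideways, which is why turn and drift errors dominate.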

5.2.4 Odometry: The Differential Drive Robot (1)

p = (x, y, θ)ᵀ        p′ = p + (∆x, ∆y, ∆θ)ᵀ
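As a sketch of how the pose increment is obtained from wheel encoders, assuming the chapter's differential-drive kinematics with wheel travels ∆sr, ∆sl and wheelbase b (the function name is illustrative):

```python
import math

def odometry_update(x, y, theta, ds_r, ds_l, b):
    """One odometry step for a differential-drive robot.

    ds_r, ds_l: distances travelled by the right/left wheel;
    b: distance between the two wheels (wheelbase).
    """
    ds = (ds_r + ds_l) / 2.0      # translation of the robot centre
    dtheta = (ds_r - ds_l) / b    # change of heading
    # Midpoint approximation: evaluate the heading at theta + dtheta/2
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Driving both wheels 1 m from the origin yields (1, 0, 0), i.e. pure translation along x.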

5.2.4 Odometry: The Differential Drive Robot (2)
• Kinematics

5.2.4 Odometry: The Differential Drive Robot (3)
• Error model

5.2.4 Odometry: Growth of Pose Uncertainty for Straight-Line Movement
• Note: errors perpendicular to the direction of movement are growing much faster!

5.2.4 Odometry: Growth of Pose Uncertainty for Movement on a Circle
• Note: the error ellipse does not remain perpendicular to the direction of movement!

5.2.4 Odometry: Calibration of Errors I (Borenstein [5])
• The unidirectional square path experiment

• [Image 1, Borenstein]

5.2.4 Odometry: Calibration of Errors II (Borenstein [5])
• The bidirectional square path experiment

• [Images 2/3, Borenstein]

5.2.4 Odometry: Calibration of Errors III (Borenstein [5])
• The deterministic and
non-deterministic errors

5.3 To Localize or Not?
• How to navigate between A and B
Ø navigation without hitting obstacles
Ø detection of the goal location
• Possible by always following the left wall
Ø However, how do we detect that the goal is reached?

5.3 Behavior-Based Navigation

5.3 Model-Based Navigation

5.4 Belief Representation
• a) Continuous map
with single hypothesis

• b) Continuous map
with multiple hypotheses

• c) Discretized map
with probability distribution

• d) Discretized topological
map with probability
distribution

5.4 Belief Representation: Characteristics

• Continuous
Ø Precision bound by sensor data
Ø Typically single-hypothesis pose estimate
Ø Lost when diverging (for single hypothesis)
Ø Compact representation and typically reasonable in processing power

• Discrete
Ø Precision bound by resolution of the discretization
Ø Typically multiple-hypothesis pose estimate
Ø Never lost (when it diverges it converges to another cell)
Ø Substantial memory and processing power needed
(not the case for topological maps)

5.4 Bayesian Approach: A Taxonomy of Probabilistic Models
Courtesy of Julien Diard

[Figure: taxonomy from more general to more specific — Bayesian Programs ⊃ Bayesian Networks ⊃ DBNs ⊃ Bayesian Filters (S: state, O: observation, A: action). Bayesian Filters specialize into Markov Chains, Markov Localization, MDPs, MCML and POMDPs; depending on a discrete, semi-continuous or continuous state they become HMMs, Particle Filters or Kalman Filters.]

5.4.1 Single-Hypothesis Belief – Continuous Line Map

5.4.1 Single-Hypothesis Belief – Grid and Topological Map

5.4.2 Grid-Based Representation – Multiple Hypotheses
• Grid size around 20 cm².

Courtesy of W. Burgard

5.5 Map Representation
1. Map precision vs. application

2. Features precision vs. map precision

3. Precision vs. computational complexity

• Continuous Representation

• Decomposition (Discretization)

5.5 Representation of the Environment
• Environment Representation
Ø Continuous metric → x, y, θ
Ø Discrete metric → metric grid
Ø Discrete topological → topological grid
• Environment Modeling
Ø Raw sensor data, e.g. laser range data, grayscale images
o large volume of data, low distinctiveness on the level of individual values
o makes use of all acquired information
Ø Low-level features, e.g. lines or other geometric features
o medium volume of data, average distinctiveness
o filters out the useful information, still ambiguities
Ø High-level features, e.g. doors, a car, the Eiffel tower
o low volume of data, high distinctiveness
o filters out the useful information, few/no ambiguities, not enough information

5.5.1 Map Representation: Continuous Line-Based
a) Architecture map
b) Representation with set of infinite lines

5.5.2 Map Representation: Decomposition (1)
• Exact cell decomposition

5.5.2 Map Representation: Decomposition (2)
• Fixed cell decomposition
Ø Narrow passages disappear

5.5.2 Map Representation: Decomposition (3)
• Adaptive cell decomposition

5.5.2 Map Representation: Decomposition (4)
• Fixed cell decomposition – Example with very small cells

Courtesy of S. Thrun
5.5.2 Map Representation: Decomposition (5)

• Topological Decomposition

5.5.2 Map Representation: Decomposition (6)
• Topological Decomposition

node

connectivity
(arc)

5.5.2 Map Representation: Decomposition (7)
• Topological Decomposition
~ 400 m

~ 1 km

~ 200 m

~ 50 m

~ 10 m

5.5.3 State of the Art: Current Challenges in Map Representation

• Real world is dynamic

• Perception is still a major challenge


Ø Error prone
Ø Extraction of useful information difficult

• Traversal of open space

• How to build up topology (boundaries of nodes)

• Sensor fusion

•…
5.6.1 Probabilistic, Map-Based Localization (1)
• Consider a mobile robot moving in a known environment.

• As it starts to move, say from a precisely known location, it can keep
track of its location using odometry.
• However, after some movement the robot will become very uncertain
about its position.
è update using an observation of its environment.

• Observations also lead to an estimate of the robot's position, which can
then be fused with the odometric estimate to get the best possible
update of the robot's actual position.

5.6.1 Probabilistic, Map-Based Localization (2)
• Action update
Ø action model ACT

with ot: encoder measurement, st-1: prior belief state

Ø increases uncertainty
• Perception update
Ø perception model SEE

with it: exteroceptive sensor inputs, s't: updated belief state

Ø decreases uncertainty
• Improving the belief state by moving

5.6.1 Probabilistic, Map-Based Localization (3)
• Given
Ø the position estimate p̂(k|k)
Ø its covariance Σp(k|k) for time k,
Ø the current control input u(k)
Ø the current set of observations Z(k+1) and
Ø the map M(k)

• Compute the
Ø new (a posteriori) position estimate p̂(k+1|k+1) and
Ø its covariance Σp(k+1|k+1)

• Such a procedure usually involves five steps:

5.6.1 The Five Steps for Map-Based Localization

[Figure: localization loop — encoder data feed the position prediction (odometry); the map database provides predicted feature observations; on-board sensors deliver the actual observation (raw sensor data or extracted features); matching of predicted and actual observations yields the matched pairs used in the estimation (fusion)]

1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)

5.6.1 Markov versus Kalman Filter Localization

• Markov localization
Ø localization starting from any unknown position
Ø recovers from ambiguous situations
Ø However, updating the probability of all positions within the whole
state space at any time requires a discrete representation of the
space (grid). The required memory and computing power can thus
become very large if a fine grid is used.

• Kalman filter localization
Ø tracks the robot and is inherently very precise and efficient.
Ø However, if the uncertainty of the robot becomes too large (e.g. after a
collision with an object) the Kalman filter will fail and the
position is definitively lost.

5.6.2 Markov Localization (1)
• Markov localization uses an
explicit, discrete representation for the probability of
all positions in the state space.

• This is usually done by representing the environment by a grid or a
topological graph with a finite number of possible states (positions).

• During each update, the probability of each state (element) of the
entire space is updated.

5.6.2 Markov Localization (2): Applying Probability Theory to Robot Localization
• P(A): probability that A is true.
Ø e.g. p(rt = l): probability that the robot r is at position l at time t

• We wish to compute the probability of each individual robot position
given actions and sensor measurements.
• P(A|B): conditional probability of A given that we know B.
Ø e.g. p(rt = l | it): probability that the robot is at position l given the
sensor input it.
• Product rule:
p(A ∧ B) = p(A|B) · p(B)
• Bayes rule:
p(A|B) = p(B|A) · p(A) / p(B)

5.6.2 Markov Localization (3): Applying Probability Theory to Robot Localization
• Bayes rule:
p(l | i) = p(i | l) · p(l) / p(i)

Ø maps a belief state and a sensor input to a refined belief state (SEE)

Ø p(l): belief state before the perceptual update process
Ø p(i | l): probability of getting measurement i when being at position l
o consult the robot's map, identify the probability of a certain sensor reading for each
possible position in the map
Ø p(i): normalization factor so that the sum over all l in L equals 1.
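The SEE step can be sketched for a discrete belief as follows (a minimal illustration; the dictionary representation and function name are mine, not from the chapter):

```python
def see_update(belief, likelihood):
    """Perception update: p(l | i) = p(i | l) * p(l) / p(i), where p(i) is the
    normalization factor making the posterior sum to 1 over all positions l."""
    posterior = {l: likelihood[l] * p for l, p in belief.items()}
    p_i = sum(posterior.values())  # normalization factor p(i)
    return {l: q / p_i for l, q in posterior.items()}
```

For example, a uniform belief over three cells combined with likelihoods 0.8/0.1/0.1 yields a posterior of 0.8/0.1/0.1: the perception update sharpens the belief.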

5.6.2 Markov Localization (4): Applying Probability Theory to Robot Localization
• Theorem of total probability:
p(lt) = Σl′ p(lt | l′, ot) · p(l′)

Ø maps a belief state and an action to a new belief state (ACT)

Ø summing over all possible ways in which the robot may have reached l.

• Markov assumption: the update only depends on the previous state and its
most recent actions and perception.
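The ACT step sums over all previous positions weighted by a motion model; a minimal sketch for a discrete belief (the representation and names are mine, not from the chapter):

```python
def act_update(belief, motion_model):
    """Action update (total probability): p(l_t) = sum over l' of
    p(l_t | l', o_t) * p(l'), where motion_model(l') returns the
    distribution over successor positions given the odometry reading."""
    new_belief = dict.fromkeys(belief, 0.0)
    for l_prev, p_prev in belief.items():
        for l_next, p_move in motion_model(l_prev).items():
            new_belief[l_next] += p_move * p_prev
    return new_belief
```

With a "move right, succeeding with probability 0.8" model on a three-cell ring, a belief concentrated at cell 0 spreads to 0.8 at cell 1 and 0.2 at cell 0: the action update increases uncertainty.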

5.6.2 Markov Localization: Case Study 1 – Topological Map (1)
• The Dervish Robot
• Topological Localization with Sonar

5.6.2 Markov Localization: Case Study 1 – Topological Map (2)
• Topological map of office-type environment

5.6.2 Markov Localization: Case Study 1 – Topological Map (3)
• Update of the belief state for position n given the percept pair i

Ø p(n|i): new likelihood of being in position n
Ø p(n): current belief state
Ø p(i|n): probability of seeing i in n (see table)
• No action update!
Ø However, the robot is moving and therefore we can apply a combination
of action and perception update

Ø t-i is used instead of t-1 because the topological distance between n′ and
n can vary depending on the specific topological map

5.6.2 Markov Localization: Case Study 1 – Topological Map (4)
• The term

is calculated by multiplying the probability of generating perceptual
event i at position n by the probability of having failed to generate
perceptual events at all nodes between n′ and n.

5.6.2 Markov Localization: Case Study 1 – Topological Map (5)
• Example calculation
Ø Assume that the robot has two nonzero belief states
o p(1-2) = 1.0 ; p(2-3) = 0.2 *
and that it is facing east with certainty
Ø State 2-3 will progress potentially to 3 and 3-4 to 4.
Ø States 3 and 3-4 can be eliminated because the likelihood of detecting an open door
there is zero.
Ø The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2,
(a) the likelihood of detecting nothing at node 3, and (b) the likelihood of
detecting a hallway on the left and a door on the right at node 4. (For simplicity we
assume that the likelihood of detecting nothing at node 3-4 is 1.0.)
Ø This leads to:
o 0.2 ⋅ [0.6 ⋅ 0.4 + 0.4 ⋅ 0.05] ⋅ 0.7 ⋅ [0.9 ⋅ 0.1] → p(4) = 0.003.
o A similar calculation for progress from 1-2 → p(2) = 0.3.

* Note that the probabilities do not sum up to one. For simplicity, normalization was avoided in this example.
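The slide's arithmetic can be checked directly (the factors are taken verbatim from the product above):

```python
# Product of: initial belief p(2-3), failure to detect node 3,
# and the percept likelihoods at node 4, exactly as on the slide.
p_4 = 0.2 * (0.6 * 0.4 + 0.4 * 0.05) * 0.7 * (0.9 * 0.1)
print(round(p_4, 4))  # 0.0033, i.e. p(4) ≈ 0.003 as stated
```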
5.6.2 Markov Localization: Case Study 2 – Grid Map (1)
• Fine fixed-decomposition grid (x, y, θ): 15 cm × 15 cm × 1°
Ø Action and perception update
• Action update:
Ø sum over previous possible positions
and the motion model

Ø discrete version of eq. 5.22

• Perception update:
Ø given perception i, what is the
probability of being at location l

Courtesy of W. Burgard
5.6.2 Markov Localization: Case Study 2 – Grid Map (2)
• The critical challenge is the calculation of p(i|l)
Ø The number of possible sensor readings and geometric contexts is extremely large
Ø p(i|l) is computed using a model of the robot's sensor behavior, its position l, and
the local metric map around l.
Ø Assumptions
o Measurement error can be described by a distribution with a mean
o Non-zero chance for any measurement

Courtesy of W. Burgard
5.6.2 Markov Localization: Case Study 2 – Grid Map (3)
• The 1D case
1. Start
Ø No knowledge at start, thus we have
a uniform probability distribution.
2. Robot perceives the first pillar
Ø Seeing only one pillar, the probability of
being at pillar 1, 2 or 3 is equal.

3. Robot moves
Ø The action model allows us to estimate the
new probability distribution based
on the previous one and the motion.
4. Robot perceives the second pillar
Ø Based on all prior knowledge, the
probability of being at pillar 2 becomes
dominant
5.6.2 Markov Localization: Case Study 2 – Grid Map (4)
• Example 1: Office Building

Courtesy of W. Burgard

[Figure: belief at positions 3, 4 and 5]

5.6.2 Markov Localization: Case Study 2 – Grid Map (5)
• Example 2: Museum (Courtesy of W. Burgard)

Ø Laser scan 1

5.6.2 Markov Localization: Case Study 2 – Grid Map (6)
• Example 2: Museum (Courtesy of W. Burgard)

Ø Laser scan 2

5.6.2 Markov Localization: Case Study 2 – Grid Map (7)
• Example 2: Museum (Courtesy of W. Burgard)

Ø Laser scan 3

5.6.2 Markov Localization: Case Study 2 – Grid Map (8)
• Example 2: Museum (Courtesy of W. Burgard)

Ø Laser scan 13

5.6.2 Markov Localization: Case Study 2 – Grid Map (9)
• Example 2: Museum (Courtesy of W. Burgard)

Ø Laser scan 21

5.6.2 Markov Localization: Case Study 2 – Grid Map (10)
• Fine fixed-decomposition grids result in a huge state space
Ø Very high processing power needed
Ø Large memory requirement
• Reducing complexity
Ø Various approaches have been proposed for reducing complexity
Ø The main goal is to reduce the number of states that are updated in each
step
• Randomized sampling / particle filter
Ø Approximate the belief state by representing only a 'representative' subset
of all states (possible locations)
Ø E.g. update only 10% of all possible locations
Ø The sampling process is typically weighted, e.g. put more samples
around the local peaks in the probability density function
Ø However, you have to ensure that some less likely locations are still tracked,
otherwise the robot might get lost
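The weighted sampling idea above can be sketched as follows (a minimal illustration; the function name and the 5% exploration fraction are my own choices, not from the chapter):

```python
import random

def resample(particles, weights, explore_frac=0.05):
    """Weighted resampling for particle-filter localization.

    Most new samples are drawn in proportion to their weights, so they
    concentrate around peaks of the belief; a small uniform fraction is
    kept so that less likely locations are not lost entirely.
    """
    n = len(particles)
    n_uniform = int(explore_frac * n)
    kept = random.choices(particles, weights=weights, k=n - n_uniform)
    kept += random.choices(particles, k=n_uniform)  # uniform draws
    return kept
```

The uniform fraction is the guard mentioned in the last bullet: without it, a temporarily unlikely but true location can vanish from the sample set and the robot gets lost.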
5.6.3 Kalman Filter Localization

5.6.3 Introduction to Kalman Filter (1)
• Two measurements

• Weighted least squares

• Finding the minimum error

• After some calculation and rearrangement
5.6.3 Introduction to Kalman Filter (2)
• In Kalman Filter notation

5.6.3 Introduction to Kalman Filter (3)
• Dynamic Prediction (robot moving)
u = velocity
w = noise
• Motion

• Combining fusion and dynamic prediction

5.6.3 Kalman Filter for Mobile Robot Localization: Robot Position Prediction
• In a first step, the robot's position at time step k+1 is predicted based on
its old location (time step k) and its movement due to the control input
u(k):

p̂(k+1|k) = f(p̂(k|k), u(k))        f: odometry function

Σp(k+1|k) = ∇pf · Σp(k|k) · ∇pfᵀ + ∇uf · Σu(k) · ∇ufᵀ

5.6.3 Kalman Filter for Mobile Robot Localization: Robot Position Prediction: Example

For the differential-drive robot, the odometry function gives

p̂(k+1|k) = p̂(k|k) + u(k)
         = (x̂(k), ŷ(k), θ̂(k))ᵀ + ( ((∆sr + ∆sl)/2)·cos(θ + (∆sr − ∆sl)/(2b)),
                                     ((∆sr + ∆sl)/2)·sin(θ + (∆sr − ∆sl)/(2b)),
                                     (∆sr − ∆sl)/b )ᵀ

Σp(k+1|k) = ∇pf · Σp(k|k) · ∇pfᵀ + ∇uf · Σu(k) · ∇ufᵀ

Σu(k) = ( kr·|∆sr|      0
              0      kl·|∆sl| )
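This covariance propagation can be sketched in plain Python, with the Jacobians ∇pf and ∇uf written out by hand for the differential-drive odometry function (helper names are illustrative):

```python
import math

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_T(A):
    return [list(r) for r in zip(*A)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def predict_covariance(Sp, theta, ds_r, ds_l, b, kr, kl):
    """Σp(k+1|k) = Fp·Σp·Fpᵀ + Fu·Σu·Fuᵀ for differential-drive odometry."""
    ds, dth = (ds_r + ds_l) / 2.0, (ds_r - ds_l) / b
    a = theta + dth / 2.0
    c, s = math.cos(a), math.sin(a)
    # Jacobian of the pose update with respect to the pose (∇p f)
    Fp = [[1.0, 0.0, -ds * s],
          [0.0, 1.0,  ds * c],
          [0.0, 0.0,  1.0]]
    # Jacobian with respect to the wheel increments (∆sr, ∆sl) (∇u f)
    Fu = [[c / 2 - ds * s / (2 * b), c / 2 + ds * s / (2 * b)],
          [s / 2 + ds * c / (2 * b), s / 2 - ds * c / (2 * b)],
          [1.0 / b,                 -1.0 / b]]
    # Wheel-increment covariance Σu(k) = diag(kr|∆sr|, kl|∆sl|)
    Su = [[kr * abs(ds_r), 0.0], [0.0, kl * abs(ds_l)]]
    term_p = mat_mul(mat_mul(Fp, Sp), mat_T(Fp))
    term_u = mat_mul(mat_mul(Fu, Su), mat_T(Fu))
    return mat_add(term_p, term_u)
```

Even starting with zero pose uncertainty, every step adds the ∇uf·Σu·∇ufᵀ term, which is exactly the unbounded error growth of odometry.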

5.6.3 Kalman Filter for Mobile Robot Localization: Observation

• The second step is to obtain the observation Z(k+1) (measurements) from
the robot's sensors at the new location at time k+1.
• The observation usually consists of a set of n0 single observations zj(k+1)
extracted from the different sensor signals. It can represent raw data
scans as well as features like lines, doors or any kind of landmarks.
• The parameters of the targets are usually observed in the sensor frame
{S}.
Ø Therefore the observations have to be transformed to the world frame {W},
or
Ø the measurement predictions have to be transformed to the sensor frame {S}.
Ø This transformation is specified in the function hi (seen later).

5.6.3 Kalman Filter for Mobile Robot Localization: Observation: Example

[Figure: raw data of the laser scanner, extracted lines, and extracted lines in model space; line j has parameters αj and rj in the sensor (robot) frame]

zj(k+1) = (αj, rj)ᵀ

ΣR,j(k+1) = ( σαα  σαr
              σrα  σrr )j
5.6.3 Kalman Filter for Mobile Robot Localization: Measurement Prediction
• In the next step we use the predicted robot position p̂(k+1|k)
and the map M(k) to generate multiple predicted observations zt.
• They have to be transformed into the sensor frame:
ẑi(k+1) = hi(zt, p̂(k+1|k))

• We can now define the measurement prediction as the set containing
all ni predicted observations:
Ẑ(k+1) = { ẑi(k+1) | 1 ≤ i ≤ ni }

• The function hi is mainly the coordinate transformation between the
world frame and the sensor frame

5.6.3 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example
• For prediction, only the walls that are in
the field of view of the robot are selected.
• This is done by linking the individual
lines to the nodes of the path

5.6.3 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example
• The generated measurement predictions have to be transformed to the
robot frame {R}

• According to the figure on the previous slide, the transformation is given by

and its Jacobian by

5.6.3 Kalman Filter for Mobile Robot Localization: Matching
• Assignment of observations zj(k+1) (gained by the sensors) to the targets zt (stored in
the map)
• For each measurement prediction for which a corresponding observation is found, we
calculate the innovation:

and its innovation covariance, found by applying the error propagation law:

• The validity of the correspondence between measurement and prediction can e.g. be
evaluated through the Mahalanobis distance:
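The gating test can be sketched for 2D line features (α, r) as follows; the gate value 9.21, the 99% χ² quantile for 2 degrees of freedom, is my illustrative choice, not from the chapter:

```python
def mahalanobis2(v, S):
    """Squared Mahalanobis distance vᵀ S⁻¹ v for a 2D innovation v
    with 2x2 innovation covariance S (inverted in closed form)."""
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    sv = [inv[0][0] * v[0] + inv[0][1] * v[1],
          inv[1][0] * v[0] + inv[1][1] * v[1]]
    return v[0] * sv[0] + v[1] * sv[1]

def match(z_obs, z_pred, S, gate=9.21):
    """Accept the pairing if the innovation falls inside the validation gate."""
    v = [z_obs[0] - z_pred[0], z_obs[1] - z_pred[1]]  # innovation
    return mahalanobis2(v, S) <= gate
```

Weighting the innovation by S⁻¹ means a large raw difference is still accepted when the prediction is very uncertain, and rejected when it is confident.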

5.6.3 Kalman Filter for Mobile Robot Localization: Matching: Example

5.6.3 Kalman Filter for Mobile Robot Localization: Matching: Example
• To find correspondences (pairs) of predicted and observed features we
use the Mahalanobis distance

with

5.6.3 Kalman Filter for Mobile Robot Localization: Estimation: Applying the Kalman Filter
• Kalman filter gain:

• Update of the robot's position estimate:

• The associated variance
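In one dimension the gain, position update and variance update reduce to scalars and can be sketched as follows (names are illustrative; in the vector case the division becomes multiplication by the inverse innovation covariance):

```python
def kalman_update(p_pred, var_pred, innovation, s):
    """Scalar (1D) estimation step: given the predicted position and its
    variance, the innovation v and the innovation covariance s."""
    k = var_pred / s                # Kalman filter gain
    p = p_pred + k * innovation    # fuse prediction with the innovation
    var = var_pred - k * s * k     # updated variance (always shrinks)
    return p, var
```

The perception step decreases uncertainty: the updated variance is always smaller than the predicted one, mirroring the SEE/ACT asymmetry discussed earlier.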

5.6.3 Kalman Filter for Mobile Robot Localization: Estimation: 1D Case
• For the one-dimensional case we can
show that the estimation corresponds to the Kalman filter for one
dimension presented earlier.

5.6.3 Kalman Filter for Mobile Robot Localization: Estimation: Example
• Kalman filter estimate of the new robot
position p̂(k+1|k+1):
Ø By fusing the prediction of the robot position
(magenta) with the innovation gained by
the measurements (green) we get the
updated estimate of the robot position
(red)

5.7 Other Localization Methods (not probabilistic)

5.7.1 Other Localization Methods: Localization Based on Artificial Landmarks

5.7.3 Other Localization Methods: Positioning Beacon Systems: Triangulation

5.7.3 Other Localization Methods: Positioning Beacon Systems: Docking

5.7.3 Other Localization Methods: Positioning Beacon Systems: Bar-Code

5.7.3 Other Localization Methods: Positioning Beacon Systems

5.8 Autonomous Map Building

Starting from an arbitrary initial point,
a mobile robot should be able to autonomously explore the
environment with its on-board sensors,
gain knowledge about it,
interpret the scene,
build an appropriate map
and localize itself relative to this map.

SLAM
The Simultaneous Localization and Mapping Problem

5.8 Map Building: How to Establish a Map

1. By hand
Motivation against it:
- by hand: hard and costly
- dynamically changing environment
- different look due to different perception

2. Automatically: map building
Ø the robot learns its environment
Ø predictability

3. Basic requirements of a map:
Ø a way to incorporate newly sensed
information into the existing world model
Ø information and procedures for estimating
the robot's position
Ø information to do path planning and other
navigation tasks (e.g. obstacle avoidance)

• Measure of quality of a map:
Ø topological correctness
Ø metrical correctness
• But: most environments are a mixture of
predictable and unpredictable features
→ hybrid approach:
model-based vs. behaviour-based
5.8 Map Building: The Problems

1. Map maintaining: keeping track of
changes in the environment
- e.g. a disappearing cupboard
- e.g. a measure of belief in each
environment feature

2. Representation and
reduction of uncertainty
- position of robot -> position of wall
- position of wall -> position of robot
• probability densities for feature positions
• additional exploration strategies
5.8.1 General Map Building Schematics

5.8.1 Map Representation
• M is a set of n probabilistic feature locations
• Each feature is represented by its covariance matrix Σt and an
associated credibility factor ct

• ct is between 0 and 1 and quantifies the belief in the existence of the
feature in the environment

• a and b define the learning and forgetting rate, and ns and nu are the
numbers of matched and unobserved predictions up to time k,
respectively.

5.8.1 Autonomous Map Building: Stochastic Map Technique
• Stacked system state vector:

• State covariance matrix:

5.8.2 Cyclic Environments
• Small local errors accumulate to arbitrarily large global errors!
• This is usually irrelevant for navigation
• However, when closing loops, the global error does matter

5.8.2 Dynamic Environments
• Dynamic changes require continuous mapping

• If extraction of high-level features were
possible, mapping in dynamic
environments would become
significantly more straightforward.
Ø e.g. the difference between a human and a wall
Ø Environment modeling is a key factor
for robustness

5.8 Map Building: Exploration and Graph Construction

1. Exploration
Ø explore / on stack / already examined
Ø backtracking for unexplored openings

2. Graph construction
Where to put the nodes?
• Topology-based: at distinctive locations
- provides correct topology
- must recognize already visited locations
• Metric-based: where features disappear or
become visible