Unit 4: Probabilistic Map-Based Localization
Importance in Robotics: Accurate localization is crucial for navigation, path planning, and
obstacle avoidance in mobile robots.
2. Challenges in Localization
One of the primary challenges in localization is uncertainty. Robots rely on sensors such as
cameras, LIDAR, and ultrasonic rangefinders, which are often noisy and unreliable.
Environmental factors, sensor limitations, and mechanical inaccuracies contribute to
localization errors. Additionally, environments can be dynamic, with moving obstacles or
changing layouts, making accurate localization even more challenging.
The probabilistic approach to localization uses probability theory to handle the inherent
uncertainty in sensor measurements. Instead of trying to pinpoint the exact location of a
robot, probabilistic localization estimates a "belief" or probability distribution over possible
locations. This approach allows for robust localization even when sensor data is noisy or
incomplete.
The belief distribution is a probability distribution over all possible locations of the robot. It
represents the robot's understanding of where it might be within the environment, given all the
information available so far:

Bel(x_t) = P(x_t | z_{1:t}, u_{1:t})

where:
x_t is the robot's state (position and orientation) at time t
z_{1:t} is the sequence of sensor measurements received up to time t
u_{1:t} is the sequence of control inputs (motion commands) applied up to time t
In probabilistic localization, the prior is the initial estimate of the robot's location before
incorporating any new sensor data. The posterior is the updated belief after integrating new
sensor information.
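Written in the notation used for the update equations below (a standard Bayes-filter convention, added here for clarity), the two beliefs are:

Bel_pred(x_t) = P(x_t | z_{1:t-1}, u_{1:t})   (prior, or predicted belief)
Bel(x_t) = P(x_t | z_{1:t}, u_{1:t})          (posterior, or updated belief)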
The localization process can be broken down into two main steps:
Before receiving any new sensor data, the robot predicts its new location based on the prior
belief and any motion data from control inputs (such as moving forward or turning).
The predicted belief (prior) for the current state x_t is computed as:

Bel_pred(x_t) = ∫ P(x_t | x_{t-1}, u_t) · Bel(x_{t-1}) dx_{t-1}

where:
P(x_t | x_{t-1}, u_t) is the motion model, i.e., the probability of reaching state x_t from the previous state x_{t-1} under control input u_t
Bel(x_{t-1}) is the posterior belief from the previous time step
the integral (a sum, for discrete state spaces) marginalizes over all possible previous states x_{t-1}
When new sensor data z_t is received, it updates the predicted belief (prior) to obtain the
posterior:

Bel(x_t) = η · P(z_t | x_t) · Bel_pred(x_t)

where:
P(z_t | x_t) is the measurement model, i.e., the likelihood of observing z_t if the robot is in state x_t
Bel_pred(x_t) is the predicted belief from the prediction step
η is a normalization constant that makes the posterior sum (or integrate) to 1
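The two steps above can be made concrete with a small numerical sketch. The following Python snippet (a minimal illustration added here, not part of the original notes; the grid size, motion-noise probability, and sensor likelihoods are invented for the example) applies the prediction and measurement-update equations to a one-dimensional grid of possible positions:

```python
import numpy as np

# Belief over 5 discrete cells, initially uniform (the robot could be anywhere).
belief = np.ones(5) / 5

def predict(belief, p_move=0.8):
    # Prediction step: apply the motion model P(x_t | x_{t-1}, u_t) for the
    # command "move one cell to the right"; the move succeeds with prob. 0.8,
    # otherwise the robot stays in place (wheel slip).
    predicted = np.zeros_like(belief)
    for i in range(len(belief)):
        predicted[(i + 1) % len(belief)] += p_move * belief[i]
        predicted[i] += (1 - p_move) * belief[i]
    return predicted

def update(belief_pred, likelihood):
    # Measurement update: Bel(x_t) = η · P(z_t | x_t) · Bel_pred(x_t).
    posterior = likelihood * belief_pred
    return posterior / posterior.sum()   # η normalizes the distribution

# Assumed measurement model: the sensor reports "door seen", and doors are
# known (from the map) to be at cells 0 and 3.
likelihood = np.array([0.9, 0.1, 0.1, 0.9, 0.1])

belief_pred = predict(belief)              # prediction step (prior)
belief = update(belief_pred, likelihood)   # measurement update (posterior)
print(belief)
```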
Bayes’ theorem is the foundation for updating the belief based on new evidence. In
localization, it allows us to update the belief about the robot’s position by combining prior
beliefs and new sensor measurements.
P(x_t | z_{1:t}, u_{1:t}) = [ P(z_t | x_t) · P(x_t | z_{1:t-1}, u_{1:t}) ] / P(z_t | z_{1:t-1}, u_{1:t})

where:
P(x_t | z_{1:t}, u_{1:t}) is the posterior belief over the state x_t given all measurements and controls so far
P(z_t | x_t) is the measurement likelihood
P(x_t | z_{1:t-1}, u_{1:t}) is the predicted (prior) belief before incorporating z_t
P(z_t | z_{1:t-1}, u_{1:t}) is a normalizing term, the marginal likelihood of the new measurement
In practice, Bayes’ theorem simplifies to an iterative process where each new sensor
measurement updates the prior belief, refining the robot’s estimate of its position.
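As a small worked example (the numbers are invented for illustration and are not from the original notes): suppose the robot's prior belief over two candidate locations is P(A) = P(B) = 0.5, and the current sensor reading z is more likely at A than at B, with P(z | A) = 0.8 and P(z | B) = 0.4. Then

P(A | z) = (0.8 × 0.5) / (0.8 × 0.5 + 0.4 × 0.5) = 0.4 / 0.6 ≈ 0.67
P(B | z) = (0.4 × 0.5) / 0.6 ≈ 0.33

so the single measurement shifts the belief toward location A without discarding B entirely; the next measurement would update this new prior in the same way.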
Metric Maps: Represent the environment with precise measurements, often in the
form of a grid where each cell has coordinates and the environment’s features are
mapped to a scale. Probabilistic methods, such as particle filters, are effective on
metric maps.
Topological Maps: Represent the environment as a network of interconnected
locations or landmarks. Localization with topological maps may rely more on relative
positions rather than exact coordinates.
Hybrid Maps: Combine metric and topological information, providing flexibility to
handle both precise and relative positioning.
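To make the metric-map case concrete, the sketch below (an illustrative data structure added here, not from the original notes; the resolution and example coordinates are assumptions) shows a simple occupancy-style grid in which world coordinates are mapped to cells at a fixed scale:

```python
import numpy as np

class GridMap:
    """Minimal metric map: a 2-D grid of cells at a fixed resolution (meters per cell)."""

    def __init__(self, width_m, height_m, resolution=0.1):
        self.resolution = resolution
        self.cells = np.zeros((int(height_m / resolution), int(width_m / resolution)))

    def world_to_cell(self, x_m, y_m):
        # Convert a metric (x, y) position into (row, col) grid indices.
        return int(y_m / self.resolution), int(x_m / self.resolution)

    def mark_occupied(self, x_m, y_m):
        row, col = self.world_to_cell(x_m, y_m)
        self.cells[row, col] = 1.0   # 1.0 = occupied, 0.0 = free

# Example: a 10 m x 10 m area with 10 cm cells and an obstacle at (2.5 m, 4.0 m).
grid = GridMap(10.0, 10.0, resolution=0.1)
grid.mark_occupied(2.5, 4.0)
```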
Several algorithms implement probabilistic localization, but two common methods are the
Kalman Filter and Particle Filter.
Kalman Filter: Effective when the system’s dynamics and measurement models are
(approximately) linear with Gaussian noise. It maintains a Gaussian belief distribution, which
simplifies calculations but limits the belief to a unimodal distribution (i.e., a single location
hypothesis).
Particle Filter: Also known as Monte Carlo Localization (MCL), this method uses a
set of particles to represent the belief distribution. Each particle represents a potential
state (position and orientation) of the robot, and their distribution reflects the
probability of different locations. Particle filters are robust in handling non-linearities
and multi-modal distributions, making them suitable for complex environments.
KALMAN FILTER
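The notes do not include a worked Kalman-filter example here, so the following is a minimal one-dimensional sketch under assumed values (the process noise q, measurement noise r, motion command u, and the example observation are invented for illustration). It shows the same predict/update structure as the Bayes filter above, but with the belief represented by a mean and a variance:

```python
# Minimal 1-D Kalman filter: the belief is a Gaussian with mean `mu` and variance `sigma2`.
def kf_predict(mu, sigma2, u, q):
    # Motion model: x_t = x_{t-1} + u, with additive process noise of variance q.
    return mu + u, sigma2 + q

def kf_update(mu_pred, sigma2_pred, z, r):
    # Measurement model: z_t = x_t + noise, with measurement noise of variance r.
    k = sigma2_pred / (sigma2_pred + r)    # Kalman gain
    mu = mu_pred + k * (z - mu_pred)       # correct the prediction with the innovation
    sigma2 = (1 - k) * sigma2_pred         # updated (reduced) uncertainty
    return mu, sigma2

# Example: start at x = 0 with variance 1, command "move 1 m", then observe z = 1.2 m.
mu, sigma2 = 0.0, 1.0
mu, sigma2 = kf_predict(mu, sigma2, u=1.0, q=0.1)
mu, sigma2 = kf_update(mu, sigma2, z=1.2, r=0.5)
print(mu, sigma2)   # posterior estimate of position and its variance
```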
PARTICLE FILTER
Particle filters (PF) are sequential Monte Carlo methods under the Bayesian estimation
framework and have been widely used in many fields such as signal processing, target
tracking, mobile robot localization, image processing, and various economics applications.
The key idea is to represent the posterior probability density function (PDF) of the state variables
by a set of random samples, or particles, with associated weights, and to compute estimates based
on these samples and weights. A PF can estimate the system state well when the number of
particles (parallel hypotheses of the state vector) is large. The PF can be applied to any state
transition or measurement model, even when some errors in inertial sensors exhibit complex
stochastic characteristics. Such errors are hard to handle with a linear estimator such as the
Kalman filter because of their inherent nonlinearity and randomness. However, the method has
not yet become widespread in industry, because implementation details are often missing from
the available research literature and because its computational complexity must be managed in
real-time applications. The first method discussed here is triangulation using WiFi (IEEE 802.11
WLAN), which consists of identifying access points in the environment. One advantage of using
WiFi technology is its widespread availability in indoor environments.
WIFI
According to the Wi-Fi Alliance, over 700 million people use WiFi and there are about 800
million new WiFi devices every year. This freely available wireless infrastructure has prompted
many researchers to develop WiFi-based positioning systems for indoor environments.
Three main approaches to WiFi-based positioning exist: time-based, angle-based, and
signal-strength-based. Often, however, no WiFi access points are available, and another way of
identifying the environment is needed. Omnidirectional cameras are an inexpensive alternative,
and many features of the environment can be extracted from their images.
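As an illustration of the signal-strength-based approach mentioned above, the sketch below estimates the distance to each access point from RSSI using the common log-distance path-loss model and then trilaterates a 2-D position by least squares (the reference power, path-loss exponent, access-point coordinates, and RSSI readings are assumed values, not from the original notes):

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    # Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(ap_positions, distances):
    # Subtract the circle equation of the last access point from the others,
    # which yields a linear system solvable by least squares.
    (x_n, y_n), d_n = ap_positions[-1], distances[-1]
    a, b = [], []
    for (x_i, y_i), d_i in zip(ap_positions[:-1], distances[:-1]):
        a.append([2 * (x_n - x_i), 2 * (y_n - y_i)])
        b.append(d_i**2 - d_n**2 - x_i**2 + x_n**2 - y_i**2 + y_n**2)
    pos, *_ = np.linalg.lstsq(np.array(a), np.array(b), rcond=None)
    return pos  # estimated (x, y)

# Example: three access points at known positions and their measured RSSI values.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-58.0, -65.0, -62.0]
dists = [rssi_to_distance(r) for r in rssi]
print(trilaterate(aps, dists))
```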
1. Prediction: Based on the robot’s previous position and control inputs (e.g., movement
commands), a predicted belief distribution is generated. This step accounts for the
expected change in the robot's location.
2. Measurement Update: Sensor data is used to update the predicted belief, refining it
by comparing actual sensor readings with expected measurements derived from the
map. Using Bayes’ theorem, the belief is updated to better reflect the robot’s actual
position.
3. Resampling (in Particle Filter): In particle filters, particles with higher weights (i.e.,
those closer to the likely position) are duplicated, while those with low weights are
discarded. This resampling step helps concentrate the belief around the most probable
locations.
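A minimal Monte Carlo Localization sketch following these three steps is given below (illustrative only, not from the original notes; the corridor map, landmark positions, motion noise, and sensor noise are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 1-D corridor with landmarks (doors) at known map positions.
LANDMARKS = np.array([2.0, 7.0, 15.0])
N = 1000                                     # number of particles

particles = rng.uniform(0.0, 20.0, size=N)   # initial belief: uniform over the corridor
weights = np.ones(N) / N

def predict(particles, u, motion_noise=0.2):
    # 1. Prediction: move every particle by the commanded distance plus noise.
    return particles + u + rng.normal(0.0, motion_noise, size=particles.shape)

def update(particles, z, sensor_noise=0.5):
    # 2. Measurement update: weight each particle by how well the measured
    #    distance to the nearest landmark matches what that particle would expect.
    expected = np.min(np.abs(LANDMARKS[None, :] - particles[:, None]), axis=1)
    w = np.exp(-0.5 * ((z - expected) / sensor_noise) ** 2)
    return w / np.sum(w)

def resample(particles, weights):
    # 3. Resampling: duplicate high-weight particles, discard low-weight ones.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.ones(len(particles)) / len(particles)

# One filter cycle: the robot moves 1 m, then measures 0.8 m to the nearest door.
particles = predict(particles, u=1.0)
weights = update(particles, z=0.8)
particles, weights = resample(particles, weights)
print(np.average(particles, weights=weights))   # estimated position
```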
Advantages:
o Robustness to sensor noise and environmental changes.
o Capability to handle non-linearities and multi-modal distributions (especially
with Particle Filter).
o Provides continuous localization updates, which helps in dynamic
environments.
Limitations:
o Computational complexity, especially in dense environments requiring
numerous particles.
o Sensor-dependent performance; localization accuracy can degrade if sensor
data is highly unreliable.
o Challenges in real-time applications due to the heavy computational load.
10. Conclusion