SSE2-08: Robust SLAM
Cyrill Stachniss
Sum of Gaussians with k Modes
§ The log cannot be moved inside the sum!
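In standard notation (not reproduced from the slides), the mismatch looks like this: for a single Gaussian the log turns the likelihood into the quadratic form least squares needs, but for a mixture the log is stuck outside the sum.

```latex
% Single Gaussian: the log yields a quadratic error term
-\log \mathcal{N}(e;\, 0, \Sigma) \;=\; \tfrac{1}{2}\, e^{\top} \Sigma^{-1} e \;+\; \text{const.}

% Mixture of k Gaussians: the log cannot be moved inside the sum
-\log \sum_{j=1}^{k} w_j\, \mathcal{N}(e;\, \mu_j, \Sigma_j)
\;\neq\; \sum_{j=1}^{k} -\log\!\big( w_j\, \mathcal{N}(e;\, \mu_j, \Sigma_j) \big)
```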
Max-Mixture Approximation
§ Instead of computing the sum of Gaussians at a given point, compute the maximum of the Gaussians
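A minimal numeric sketch of this idea (the component values are illustrative, not from the slides): replacing the log of the sum by the log of the single best component makes the problem tractable again, and the approximation is off by at most log k.

```python
import math

def log_gaussian(e, mu, sigma, w):
    """Log of one weighted 1-D Gaussian component."""
    return (math.log(w)
            - 0.5 * math.log(2.0 * math.pi * sigma ** 2)
            - 0.5 * ((e - mu) / sigma) ** 2)

def log_sum_mixture(e, components):
    """Exact mixture log-likelihood: the log cannot be moved inside the sum."""
    return math.log(sum(math.exp(log_gaussian(e, *c)) for c in components))

def log_max_mixture(e, components):
    """Max-mixture: evaluate only the best mode, use it as a single Gaussian."""
    return max(log_gaussian(e, *c) for c in components)

# Two modes: a narrow inlier model and a broad outlier model (made-up numbers)
components = [(0.0, 1.0, 0.9), (0.0, 10.0, 0.1)]  # (mu, sigma, weight)
```

Since the sum of k non-negative terms is between the largest term and k times it, the max-mixture log-likelihood lower-bounds the exact one by at most log k.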
[Figure: approximation error]
Performance (100 outliers)

Max-Mixtures for Dealing with Outliers
§ Supports multi-modal constraints
§ Approximate the sum of Gaussians using the max operator
§ Idea: “Select the best mode of a sum of Gaussians and use it as if it were a single Gaussian”
§ Easy to use, quite effective
[Figure: Gauss-Newton vs. Max-Mixture (MM) Gauss-Newton]
Dynamic Covariance Scaling

Scaling Parameter
Both have squared error
Dynamic Covariance Scaling

[Figure: original error vs. scaled error after linearization]
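The equations did not survive the slide extraction; a minimal sketch of the commonly stated DCS formulation (Agarwal et al.), with the free parameter written as phi — the variable names here are assumptions:

```python
def dcs_scale(chi2, phi=1.0):
    """DCS scaling factor s = min(1, 2*phi / (phi + chi2)) for one constraint.

    chi2: squared error of the constraint; phi: free parameter.
    Small chi2 gives s = 1 (plain least squares); large chi2 drives s -> 0."""
    return min(1.0, 2.0 * phi / (phi + chi2))

def dcs_scaled_chi2(chi2, phi=1.0):
    """Error term actually minimized: the squared error scaled by s**2.
    It stays bounded as chi2 grows, so outliers cannot dominate."""
    s = dcs_scale(chi2, phi)
    return s * s * chi2
```

For chi2 ≤ phi the scale is exactly 1, so inlier constraints behave as in ordinary least squares; only suspicious constraints get their covariance inflated.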
Optimizing With Outliers
§ Assuming a Gaussian error in the constraints is not always realistic
§ Large errors are problematic

Least Squares with Robust Kernels
Robust M-Estimators: Different Rho Functions

Gaussian Case
§ Kernel function used to define the PDF
§ Gaussian
§ Absolute values (L1 norm)
§ Huber M-estimator
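The three rho functions named above have standard closed forms; a sketch (the Huber threshold c = 1.345 is a conventional default, not from the slides):

```python
def rho_gaussian(e):
    """Gaussian case (L2): quadratic loss."""
    return 0.5 * e * e

def rho_l1(e):
    """Absolute values (L1 norm): linear loss."""
    return abs(e)

def rho_huber(e, c=1.345):
    """Huber M-estimator: quadratic near zero, linear in the tails."""
    if abs(e) <= c:
        return 0.5 * e * e
    return c * (abs(e) - 0.5 * c)
```

The linear tails are what make Huber robust: a residual of 10 contributes far less than under the quadratic Gaussian loss.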
Robust Estimation
Robust Estimation as Weighted Least Squares
§ Compare both equations
§ We can use weighted least squares if we set the weight using the kernel as:

Robust Least Squares
§ We can use the weighted least squares approach to realize robust L.S.
§ The kernel will impact the Jacobians
§ The rest stays the same
§ The choice of the kernel must align with the outlier distribution
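The weight equation was lost in extraction; the standard IRLS choice is w(e) = ρ'(e)/e. A sketch for the Huber kernel on a scalar toy problem (the problem and the default c are illustrative assumptions):

```python
def huber_weight(e, c=1.345):
    """IRLS weight w(e) = rho'(e)/e for the Huber kernel:
    inliers keep weight 1, large residuals are down-weighted by c/|e|."""
    return 1.0 if abs(e) <= c else c / abs(e)

def robust_mean(measurements, x0, c=1.345, iters=10):
    """Iteratively reweighted least squares for the scalar problem x ~ z_i:
    recompute weights from current residuals, then solve the weighted LS."""
    x = x0
    for _ in range(iters):
        ws = [huber_weight(z - x, c) for z in measurements]
        x = sum(w * z for w, z in zip(ws, measurements)) / sum(ws)
    return x
```

With one gross outlier in the data, the plain mean is dragged far off while the reweighted estimate stays near the inliers; only the weights change per iteration, the solve itself is ordinary weighted least squares.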
Which Function to Choose?
§ Which loss/kernel function to choose?
§ It depends on the type of outliers!
§ Some approaches combine them:
1. Start with a kernel with strong tails for N1 iterations
2. N2 iterations with a kernel with weaker tails
3. Remove all outliers with an error larger than c
4. Gaussian/Huber for the rest
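The four-step schedule above can be sketched on a scalar toy problem; the kernel choices (a Geman-McClure-like weight for the strong-tail phase, Huber for the weaker one) and all parameter values are illustrative assumptions, not from the slides:

```python
def gm_weight(e, c):
    """Strong tails (Geman-McClure-like): weight falls off quickly."""
    return (c * c / (c * c + e * e)) ** 2

def huber_weight(e, c):
    """Weaker tails (Huber): weight decays only like c/|e|."""
    return 1.0 if abs(e) <= c else c / abs(e)

def staged_estimate(zs, c=2.0, n1=5, n2=5):
    """Kernel schedule: strong tails -> weaker tails -> drop outliers -> plain LS."""
    x = sum(zs) / len(zs)
    for _ in range(n1):                            # 1) strong tails, N1 iterations
        ws = [gm_weight(z - x, c) for z in zs]
        x = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    for _ in range(n2):                            # 2) weaker tails, N2 iterations
        ws = [huber_weight(z - x, c) for z in zs]
        x = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    kept = [z for z in zs if abs(z - x) <= c]      # 3) remove residuals larger than c
    return sum(kept) / len(kept)                   # 4) Gaussian LS on the rest
```

The strong-tail phase pulls the estimate away from gross outliers, the weaker-tail phase refines it, and the final Gaussian step runs only on the surviving inliers.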
Source: A General and Adaptive Robust Loss Function, Barron [CVPR 2019]
Recommended Video to Watch
https://www.youtube.com/watch?v=BmNKbnF69eY

Adaptive Robust Loss Function
Adaptive Robust Loss Function
§ Adaptive loss function ρ(x, α, c) with shape parameter α and scale c
§ We can limit the range of outliers to maintain a PDF
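The defining equations were lost in extraction; a sketch following Barron's general formulation and the limiting special cases stated in the cited paper:

```python
import math

def adaptive_loss(x, alpha, c):
    """Barron's general robust loss rho(x, alpha, c):
    alpha is the shape parameter, c the scale."""
    u = (x / c) ** 2
    if alpha == 2.0:                      # L2 / Gaussian case
        return 0.5 * u
    if alpha == 0.0:                      # Cauchy / Lorentzian limit
        return math.log(0.5 * u + 1.0)
    if alpha == float('-inf'):            # Welsch / Leclerc limit
        return 1.0 - math.exp(-0.5 * u)
    b = abs(alpha - 2.0)                  # general case
    return (b / alpha) * ((u / b + 1.0) ** (alpha / 2.0) - 1.0)
```

One parameter sweeps through familiar kernels: α = 2 is the Gaussian, α = 1 a smoothed-L1 (Charbonnier-like) loss, and α = −2 Geman-McClure, whose loss saturates at 2 for large residuals.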
Joint Optimization with the Adaptive Robust Kernel
§ In theory, we can now solve a joint optimization problem in the weighted least squares sense, using the adaptive loss as our robust kernel
§ Joint optimization of the state and the shape parameter

Problems in practice:
§ New Jacobians need to be computed
§ The shape parameter can dominate the parameter estimation for complex problems
§ Sensitive to the initial guess
Slide Information
§ These slides have been created by Cyrill Stachniss as part of
the robot mapping course taught in 2012/13 and 2013/14. I
created this set of slides partially extending existing material
of Edwin Olson, Pratik Agarwal, and myself.
§ I tried to acknowledge all people that contributed image or
video material. In case I missed something, please let me
know. If you adapt this course material, please make sure
you keep the acknowledgements.
§ Feel free to use and change the slides. If you use them, I
would appreciate an acknowledgement as well. To satisfy my
own curiosity, I appreciate a short email notice in case you
use the material in your course.
§ My video recordings are available through YouTube:
http://www.youtube.com/playlist?list=PLgnQpQtFTOGQrZ4O5QzbIHgl3b1JHimN_&feature=g-list