Implementation of SLAM On Mobile Robots and Stitching of The Generated Maps
Contents
1 Introduction
1.1 Organization of the Report
2 Literature Review
2.1 GMapping
2.2 Linear SLAM
2.3 Extended Kalman Filter
2.4 Scale-Invariant Feature Transform (SIFT)
3 Robots
3.1 Qbot
3.2 FireBird XII
Bibliography
Chapter 1
Introduction
Simultaneous localization and mapping (SLAM) is defined as the problem of building
a map while at the same time localizing the robot within that map. SLAM is considered one of the most fundamental problems for robots to become truly autonomous.
In practice, these two problems cannot be solved independently of each other. Before a robot can answer the question of what the environment looks like given a set
of observations, it needs to know from which locations these observations were made.
At the same time, it is hard to estimate the current position of a vehicle without
a map. Therefore, SLAM is often referred to as a chicken-and-egg problem: a good
map is needed for localization, while an accurate pose estimate is needed to build a
map. Maps of nearby regions generated by a group of robots can be joined to form a
single larger map, so that each robot can navigate to any location within it.
Image stitching algorithms can be used to stitch submaps, based on common invariant
features in those submaps, into the complete map of the environment, thereby
rendering the robots truly autonomous.
SLAM is central to a range of indoor, outdoor, aerial and underwater applications. It is used in mobile home-maintenance devices such as automated vacuum cleaners and lawn mowers. SLAM-based surveillance with unmanned aerial vehicles finds
applications in weather and disaster prediction, in defense, and in terrain
mapping for localization. SLAM robots are also significant in fields where human
exploration could be dangerous, such as underwater reef monitoring (to estimate the type
and content of minerals) and underground exploration of abandoned mines.
1.1 Organization of the Report
Chapter 2 reviews the various SLAM and image stitching algorithms discussed in
the literature.
Chapter 3 details the features of the two robots on which SLAM is to be implemented in this project.
Chapter 2
Literature Review
2.1 GMapping
GMapping is a grid-based SLAM approach built on a Rao-Blackwellized particle filter, in which each particle carries an individual occupancy grid map together with a hypothesis of the robot's trajectory. Grisetti et al. proposed an improved proposal distribution that takes the most recent laser observation into account, together with a selective resampling strategy; these techniques drastically reduce the number of particles required to build accurate grid maps [1].
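The central idea, each particle carrying its own pose hypothesis and map and being weighted by how well the current scan agrees with that map, can be sketched roughly as follows. This is only a skeleton of the Rao-Blackwellized particle filter structure, not the GMapping algorithm of [1]: the motion noise values are arbitrary, and the scan likelihood and grid update functions are caller-supplied placeholders.

import numpy as np

class Particle:
    """One particle: a pose hypothesis plus its own occupancy map and weight."""
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)   # (x, y, heading)
        self.grid = {}                             # sparse map: (ix, iy) -> log-odds
        self.weight = 1.0

def rbpf_step(particles, control, scan, scan_likelihood, update_grid):
    """One step of a Rao-Blackwellized particle filter (toy skeleton).

    scan_likelihood(scan, grid, pose) -> float and update_grid(grid, scan, pose)
    are placeholders standing in for a real laser sensor model.
    """
    v, w = control
    for p in particles:
        # 1. Sample a new pose from a (crudely) noisy motion model.
        noisy_v = v + np.random.normal(0.0, 0.05)
        noisy_w = w + np.random.normal(0.0, 0.02)
        p.pose += [noisy_v * np.cos(p.pose[2]), noisy_v * np.sin(p.pose[2]), noisy_w]
        # 2. Weight the particle by how well the scan fits *its own* map,
        #    then fold the scan into that map.
        p.weight *= scan_likelihood(scan, p.grid, p.pose)
        update_grid(p.grid, scan, p.pose)

    # 3. Normalize the weights and resample (simple multinomial resampling here;
    #    GMapping resamples selectively, only when the weights degenerate).
    weights = np.array([p.weight for p in particles])
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    new_particles = []
    for i in idx:
        clone = Particle(particles[i].pose)
        clone.grid = dict(particles[i].grid)
        new_particles.append(clone)
    return new_particles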
2.2 Linear SLAM
Zhao et al. presented Linear SLAM, a strategy for large-scale feature-based and pose graph SLAM that solves a sequence of linear least squares problems [2]. The algorithm builds large-scale maps through submap joining, where the submaps are built using any existing SLAM technique able to handle a small-scale SLAM problem. It is
demonstrated that if the submap coordinate frames are judiciously selected, the least squares
objective function for joining two submaps becomes a quadratic function of the state
vector. Therefore, the solution to a large-scale SLAM problem that requires joining a
number of local submaps, either sequentially or in a more efficient divide-and-conquer manner, can be obtained by solving a sequence of linear least squares problems. The proposed Linear SLAM technique is applicable to feature-based SLAM, pose
graph SLAM and D-SLAM, in two and three dimensions, and requires neither assumptions on the covariance matrices nor an initial guess of the state vector.
Although the algorithm is still an approximation to the optimal full nonlinear least
squares SLAM, simulations and experiments on publicly available 2D and 3D datasets show that Linear SLAM produces results very close to the best solutions obtainable with a full nonlinear least squares optimization algorithm
started from an accurate initial value. The input to the Linear SLAM algorithm is
a sequence of local submaps, each containing a state vector estimate and
the corresponding information matrix.
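As a rough illustration of the submap joining step, the sketch below aligns two hypothetical 2D submaps by estimating the rigid transform between their frames from commonly observed features using a closed-form least squares (Procrustes) fit, and then merges the feature estimates. This is only a toy under simplified assumptions; it is not Zhao et al.'s linear formulation, which also carries the information matrices and handles pose variables.

import numpy as np

def join_submaps(features_a, features_b, common_ids):
    """Join two 2D submaps by least squares rigid alignment of common features.

    features_a / features_b: dict mapping feature id -> np.array([x, y]) in each
    submap's local frame.  common_ids: ids of features observed in both submaps.
    Returns the merged feature dictionary expressed in submap A's frame.
    (Toy illustration only; not the linear formulation of Zhao et al. [2].)
    """
    pa = np.array([features_a[i] for i in common_ids])   # coordinates in frame A
    pb = np.array([features_b[i] for i in common_ids])   # coordinates in frame B

    # Closed-form least squares estimate of the rigid transform B -> A:
    # centre both point sets, take the SVD of the cross-covariance, and
    # recover rotation R and translation t (Kabsch / Procrustes).
    ca, cb = pa.mean(axis=0), pb.mean(axis=0)
    H = (pb - cb).T @ (pa - ca)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb

    # Transform every feature of submap B into frame A and merge: common
    # features are averaged, new features are simply added.
    merged = dict(features_a)
    for fid, p in features_b.items():
        p_in_a = R @ p + t
        merged[fid] = 0.5 * (merged[fid] + p_in_a) if fid in merged else p_in_a
    return merged

# Tiny usage example with hypothetical feature ids.
map_a = {1: np.array([0.0, 0.0]), 2: np.array([2.0, 0.0]), 3: np.array([1.0, 1.0])}
theta, shift = np.deg2rad(30.0), np.array([1.0, -0.5])
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
# Submap B observes features 2 and 3 (plus a new feature 4) in its own frame.
map_b = {i: R_true.T @ (p - shift) for i, p in map_a.items() if i in (2, 3)}
map_b[4] = np.array([0.5, 2.0])
print(join_submaps(map_a, map_b, common_ids=[2, 3]))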
2.3 Extended Kalman Filter
For linear systems with Gaussian noise, the Kalman filter provides an optimal state estimator. Unfortunately, most systems encountered in engineering are nonlinear, so attempts were soon made to apply this filtering
method to nonlinear systems. In estimation theory, the extended Kalman filter (EKF)
is the nonlinear version of the Kalman filter which linearizes about an estimate of the
current mean and covariance. Because the EKF is obtained using a linear approximation of a nonlinear system, it offers no guarantees of optimality in a mean squared
error sense (or in any other sense). The EKF adapts techniques from calculus, namely
multivariate Taylor series expansions, to linearize the model about a working point. If
the system model is not well known or is inaccurate, Monte
Carlo methods, especially particle filters, are employed for estimation instead. For
well-defined transition models, the EKF has been considered the de facto standard
in nonlinear state estimation, navigation systems and GPS. Methods of
optimizing the standard Kalman filter models for simultaneous localization and map
building were proposed by Guivant and Nebot [3].
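A minimal sketch of the predict/update cycle described above is given below, assuming a generic motion model f and measurement model h supplied together with their Jacobians. The toy unicycle and range-to-origin models at the bottom are illustrative assumptions, not the models of any robot used in this project.

import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : prior state estimate and covariance
    u, z : control input and measurement
    f, h : nonlinear motion and measurement models
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the mean through the nonlinear model and the
    # covariance through its first-order (Taylor) linearization.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: linearize the measurement model about the predicted state.
    H = H_jac(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: a unicycle-like robot (x, y, heading) observing its range to the origin.
def f(x, u):                                   # u = (v, w), over a unit time step
    v, w = u
    return x + np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])

def F_jac(x, u):
    v, _ = u
    return np.array([[1, 0, -v * np.sin(x[2])],
                     [0, 1,  v * np.cos(x[2])],
                     [0, 0,  1]])

def h(x):
    return np.array([np.hypot(x[0], x[1])])

def H_jac(x):
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r, 0.0]])

x, P = np.array([1.0, 0.0, 0.0]), np.eye(3) * 0.1
x, P = ekf_step(x, P, u=(0.5, 0.1), z=np.array([1.6]),
                f=f, F_jac=F_jac, h=h, H_jac=H_jac,
                Q=np.eye(3) * 0.01, R=np.array([[0.05]]))
print(x)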
2.4 Scale-Invariant Feature Transform (SIFT)
SIFT is an algorithm to detect and describe local features in images. For any object
in an image, interest points on the object can be extracted to provide a feature description of the object. This description can then be used to identify the object when
attempting to locate it in a test image containing many other objects. To perform reliable recognition, it is important that the features extracted from the training
image be detectable even under changes in image scale, noise and illumination. Such
points usually lie in high-contrast regions of the image, such as object edges.
The SIFT technique is one of the most robust and widely used image matching
algorithms based on local features, and it yields a good mosaic image and reliable results. SIFT is a feature detection and description technique; it produces keypoint
descriptors which describe the image features [4]. The technique has four computational steps for extracting keypoints: scale-space peak selection, keypoint localization, orientation assignment and keypoint descriptor computation. For each image, it
builds an image pyramid by generating progressively blurred images and subtracting
neighboring images to obtain the difference-of-Gaussians (DoG) pyramid, whose
extrema are then detected as candidate keypoints. The number of keypoints is then reduced to improve the efficiency and robustness of the technique: keypoints are rejected if they
have low contrast or lie along an edge. The next step, orientation
assignment, builds a histogram of the gradient orientations sampled in a neighborhood around each keypoint. The last step computes the keypoint descriptors.
Lowe proposed this method for extracting distinctive invariant features from
images that can be used to perform reliable matching between different views of an
object or scene [5]. Adel et al., in their work, carried out a survey of the techniques
used for image stitching based on feature extraction [6], in which the SIFT algorithm was
regarded as the most widely used algorithm for image matching. Suzuki et al. developed a SIFT-based monocular SLAM algorithm for a small unmanned aerial vehicle
[7].
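A hedged sketch of this stitching pipeline using OpenCV is shown below: SIFT keypoints are detected in two overlapping images, matched with Lowe's ratio test, and a RANSAC-estimated homography is used to warp one image onto the other. It assumes an OpenCV build that ships SIFT (opencv-python 4.4 or later) and hypothetical input file names; for occupancy-grid submaps a rigid transform estimate would usually be preferred over a full homography.

import cv2
import numpy as np

def stitch_pair(path_a, path_b):
    """Stitch two overlapping images using SIFT features (toy sketch)."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # 1. Detect SIFT keypoints and compute their 128-D descriptors.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # 2. Match descriptors and keep only matches passing Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_b, des_a, k=2)
            if m.distance < 0.75 * n.distance]

    # 3. Estimate the homography mapping image B into image A's frame,
    #    using RANSAC to reject outlier matches.
    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Warp image B onto a canvas large enough for both and paste image A.
    h_a, w_a = img_a.shape
    canvas = cv2.warpPerspective(img_b, H, (w_a * 2, h_a * 2))
    canvas[:h_a, :w_a] = np.maximum(canvas[:h_a, :w_a], img_a)
    return canvas

# Hypothetical submap images; point the paths at real files before running.
# cv2.imwrite("stitched.png", stitch_pair("submap_a.png", "submap_b.png"))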
Chapter 3
Robots
3.1 Qbot
Platform:
Maximum speed: 0.7 m/s
Sensors:
3 digital bumper sensors
3 digital wheel drop sensors
3 analog and digital cliff sensors
3-axis gyroscope
Communication:
WiFi
3.2 FireBird XII
FireBird XII is a four-wheel differential drive robot equipped with an onboard Intel Core i3 computer and various sensors.
Sensors:
Laser range finder
9-DOF IMU consisting of a 3-axis digital gyroscope, 3-axis accelerometer and 3-axis magnetometer
GPS receiver module
Battery voltage, current and temperature monitoring
On-board camera with pan-tilt movement
Locomotion:
High performance position and velocity control
4-wheel differential drive configuration
Quadrature position encoders
Velocity: 100 cm/s
Communication:
2.4 GHz wireless module on the robot, with a USB wireless module for external PC communication
Chapter 4
Bibliography
[1] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, pp. 34–46, Feb. 2007.
[2] L. Zhao, S. Huang, and G. Dissanayake, "Linear SLAM: A linear solution to the feature-based and pose graph SLAM based on submap joining," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 24–30, Nov. 2013.
[3] J. E. Guivant and E. M. Nebot, "Optimization of the simultaneous localization and map-building algorithm for real-time implementation," IEEE Transactions on Robotics and Automation, vol. 17, pp. 242–257, June 2001.
[4] F. Alhwarin, C. Wang, D. Ristic-Durrant, and A. Graser, "Improved SIFT-features matching for object recognition," in Proceedings of the 2008 International Conference on Visions of Computer Science: BCS International Academic Conference (VoCS '08), (Swinton, UK), pp. 179–190, British Computer Society, 2008.
[5] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[6] E. Adel, M. Elmogy, and H. Elbakry, "Image stitching based on feature extraction techniques: A survey," International Journal of Computer Applications, vol. 99, pp. 1–8, Aug. 2014.
[7] T. Suzuki, Y. Amano, and T. Hashizume, "Development of a SIFT based monocular EKF-SLAM algorithm for a small unmanned aerial vehicle," in Proceedings of SICE Annual Conference 2011, pp. 1656–1659, Sept. 2011.