Implementation of SLAM On Mobile Robots and Stitching of The Generated Maps

SLAM implementation on mobile robots and stitching of the sub maps generated to obtain the whole map of the environment


Abstract

Recent developments in sensor and network technologies enable a group of robots to connect in order to share information, learn from their peers and work together. With the benefit of knowledge shared on a large scale, robots can create rich maps of the real world and the objects in it. This project aims to implement Simultaneous Localization and Mapping (SLAM) on a differential-drive robot and a skid-steered robot, and to stitch the maps of nearby environments constructed by these robots. SLAM is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a challenging problem with many unknown variables, several algorithms are known to solve it in tractable time for certain environments. The mapping of the environment is carried out using a particle filter or the extended Kalman filter algorithm, with data acquired from sensors such as a Kinect or a laser range finder. Image stitching can then be used to form a map of the whole environment from the individual maps generated by the robots, so that each robot can share its map with other agents.

Contents

1 Introduction
1.1 Outline of the thesis
2 Literature Review
2.1 GMapping
2.2 Linear SLAM
2.3 Extended Kalman Filter (EKF)
2.4 Scale Invariant Feature Transform (SIFT)
3 Robots
3.1 Qbot
3.2 FireBird XII
Bibliography

Chapter 1
Introduction
Simultaneous localization and mapping (SLAM) is defined as the problem of building
a map while at the same time localizing the robot within that map. SLAM is considered one of the most fundamental problems for robots to become truly autonomous.
In practice, these two problems cannot be solved independently of each other. Before a robot can answer the question of what the environment looks like given a set
of observations, it needs to know from which locations these observations were
made. At the same time, it is hard to estimate the current position of a vehicle without
a map. Therefore, SLAM is often referred to as a chicken-and-egg problem: a good
map is needed for localization, while an accurate pose estimate is needed to build a
map. Maps of nearby regions generated by a group of robots can be joined together to
form a larger map, so that each robot can navigate to any location in the joined map.
Image stitching algorithms can be used to stitch submaps based on common invariant
features in those submaps to obtain the complete map of the environment, thereby
rendering the robots truly autonomous.
SLAM is central to a range of indoor, outdoor, aerial and underwater applications. It finds applications in mobile home-maintenance devices such as automated vacuum cleaners and lawn mowers. SLAM-based surveillance with unmanned aerial vehicles finds
applications in weather and disaster prediction, in defense, and in terrain
mapping for localization. SLAM robots are also significant in fields where human
exploration could be dangerous, such as underwater reef monitoring (to estimate the type
and content of minerals) and underground exploration of abandoned mines.

1.1

Outline of the thesis

Chapter 2 reviews the various SLAM and image stitching algorithms discussed in
the literature.
Chapter 3 details the features of the two robots on which SLAM is to be implemented in this project.

Chapter 2
Literature Review
2.1

GMapping

GMapping is a highly efficient Rao-Blackwellized particle filter for learning grid maps
from laser range data. Murphy, Doucet and colleagues introduced Rao-Blackwellized particle filters as an effective means to solve the simultaneous localization and mapping (SLAM) problem.
The main problem of the Rao-Blackwellized approaches is their complexity, measured
in terms of the number of particles required to build an accurate map; reducing this quantity is therefore one of the major challenges for this family of algorithms. Additionally, the resampling step can potentially eliminate the correct particle. This effect
is known as the particle depletion problem, or particle impoverishment. Two
approaches are used to substantially increase the performance of Rao-Blackwellized
particle filters applied to the SLAM problem with grid maps:
A proposal distribution that considers the accuracy of the robot's sensors and
allows particles to be drawn in a highly accurate manner.
An adaptive resampling technique which maintains a reasonable variety of particles, enabling the algorithm to learn an accurate map while
reducing the risk of particle depletion.
The proposal distribution is computed by evaluating the likelihood around a particle-dependent most-likely pose obtained by a scan-matching procedure combined with
odometry information. In this way, the most recent sensor observation is taken into
account when creating the next generation of particles. This allows the state of the
robot to be estimated according to a more informed (and thus more accurate) model than
the one obtained from the odometry information alone. The use of this refined
model has two effects. The map is more accurate, since the current observation is
incorporated into the individual maps after considering its effect on the pose of the
robot. This significantly reduces the estimation error, so that fewer particles are required
to represent the posterior. The second approach, the adaptive resampling strategy, performs a resampling step only when needed, and in this way keeps a
reasonable particle diversity. This results in a significantly reduced risk of particle depletion. Grisetti et al., in their work, proposed an adaptive technique for reducing this
number in a Rao-Blackwellized particle filter for learning grid maps, and presented an
approach to compute an accurate proposal distribution that takes into account not only
the movement of the robot but also the most recent observation, thereby drastically
decreasing the uncertainty about the robot's pose in the prediction step of the filter
[1].
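The adaptive resampling criterion is commonly implemented with the effective sample size, N_eff = 1 / Σ w_i². A minimal sketch in Python (NumPy) of this idea follows; the 0.5·N threshold is an illustrative assumption, not a value taken from the text:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized importance weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def adaptive_resample(particles, weights, threshold_ratio=0.5, rng=None):
    """Resample (systematically) only when N_eff drops below a fraction of N,
    preserving particle diversity the rest of the time."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    if effective_sample_size(w) >= threshold_ratio * n:
        return particles, w                    # keep the current particle set
    # systematic resampling: one random offset, n evenly spaced positions
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return [particles[i] for i in idx], np.full(n, 1.0 / n)
```

With uniform weights N_eff equals N and no resampling occurs; with highly skewed weights N_eff collapses and a resampling step resets the weights to uniform.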

2.2

Linear SLAM

Linear SLAM is a strategy for large-scale pose-feature and pose-graph SLAM that
solves a sequence of linear least squares problems. The algorithm builds large-scale maps through submap joining, where the submaps are built using any existing SLAM technique that can handle a small SLAM problem. It is
demonstrated that if the submap coordinate frames are judiciously selected, the least squares
objective function for joining two submaps becomes a quadratic function of the state
vector. Therefore, the solution to a large-scale SLAM problem that requires joining a
number of local submaps, either sequentially or in a more efficient divide-and-conquer manner, can be obtained by solving a sequence of linear least squares problems. The proposed Linear SLAM technique is applicable to feature-based SLAM, pose-graph SLAM and D-SLAM, in both two and three dimensions, and does not require any assumption on the character of the covariance matrices or an initial guess of the state vector.
Although this algorithm is still an approximation to the optimal full nonlinear least
squares SLAM, simulations and experiments using publicly available 2D and 3D datasets
show that Linear SLAM produces results very close to the best solutions obtainable by a full nonlinear least squares optimization algorithm
started from an accurate initial value. The input to the Linear SLAM algorithm is
a sequence of local submaps; each local map contains a state vector estimate and
the corresponding information matrix. Zhao et al. presented this strategy for large-scale
SLAM through solving a sequence of linear least squares problems, based on submap joining where the submaps are built using any existing
SLAM technique [2].
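The submap-joining step can be illustrated in information form. The sketch below is deliberately simplified: it assumes, for illustration, that both submaps estimate the same state vector, whereas the actual algorithm joins maps with partially overlapping states in judiciously chosen coordinate frames:

```python
import numpy as np

def join_submaps(x1, I1, x2, I2):
    """Fuse two submap estimates of the same state (information form).
    Each submap supplies a state estimate x_i and an information matrix I_i.
    The joint least-squares objective is quadratic in x, so the optimum
    solves a single linear system: (I1 + I2) x = I1 x1 + I2 x2."""
    I = I1 + I2
    x = np.linalg.solve(I, I1 @ x1 + I2 @ x2)
    return x, I
```

Two equally confident submaps placing a feature at (0, 0) and at (2, 2) fuse to the midpoint (1, 1), as the quadratic objective predicts.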

2.3

Extended Kalman Filter (EKF)

Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm
that uses a series of measurements observed over time, containing statistical noise and
other inaccuracies, and produces estimates of unknown variables that tend to be more
precise than those based on a single measurement alone, by using Bayesian inference
and estimating a joint probability distribution over the variables at each time step.
The algorithm works in two steps. In the prediction step, the Kalman filter
produces estimates of the current state variables, along with their uncertainties. Once
the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted
average, with more weight given to estimates with higher certainty. The algorithm is recursive: it can run in real time, using only the present input measurements
and the previously calculated state and its uncertainty matrix; no additional past information is required. The Kalman filter does not require the errors to be
Gaussian; however, it yields the exact conditional probability estimate in the
special case that all errors are Gaussian-distributed. The Kalman filter is the optimal
estimator for linear system models with additive independent white noise in both the
transition and the measurement models.
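The two-step process can be sketched as follows (a generic linear Kalman filter in NumPy; the matrix names F, H, Q and R follow standard convention and are not taken from the text):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction step: propagate the state estimate and its uncertainty."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Update step: fuse measurement z, weighting by relative certainty."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # weighted average of prediction and z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For a scalar state with equal prior and measurement variance, the update lands exactly halfway between the prediction and the measurement, and the posterior variance halves, which is the weighted-average behaviour described above.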
One of the most appealing approaches to solving the SLAM problem is to model the environment and sensors and assume that the errors have a Gaussian distribution.
In such scenarios, the Kalman filter can be used. Unfortunately, most engineering
systems are nonlinear, so attempts were soon made to apply this filtering
method to nonlinear systems. In estimation theory, the extended Kalman filter (EKF)
is the nonlinear version of the Kalman filter, which linearizes about an estimate of the
current mean and covariance. Because the EKF is obtained using a linear approximation of a nonlinear system, it offers no guarantees of optimality in a mean squared
error sense (or in any other sense). The EKF adapts techniques from calculus, namely
multivariate Taylor series expansions, to linearize a model about a working point. If
the system model is not well known or is inaccurate, then Monte
Carlo methods, especially particle filters, are employed for estimation. For
well-defined transition models, the EKF has been considered the de facto standard
in the theory of nonlinear state estimation, navigation systems and GPS. Methods of
optimizing the standard Kalman filter models for simultaneous localization and map
building were proposed by Guivant [3].
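The linearization can be illustrated with an EKF prediction step for a planar robot with state (x, y, θ) under a simple velocity motion model. This is a sketch only: the motion model, time step and noise matrix Q are illustrative assumptions, not taken from the text.

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt=1.0):
    """EKF prediction for state (px, py, theta) and control u = (v, w).
    The nonlinear motion model is linearized via its Jacobian F,
    the first-order term of the Taylor expansion about the current estimate."""
    px, py, th = x
    v, w = u
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```

Driving straight ahead (v = 1, w = 0) from the origin for one time step moves the predicted pose one unit along the x axis while the covariance is propagated through the Jacobian.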

2.4

Scale Invariant Feature Transform (SIFT)

SIFT is an algorithm to detect and describe local features in images. For any object
in an image, interesting points on the object can be extracted to provide a feature description of the object. This description can then be used to identify the object when
attempting to locate it in a test image containing many other objects. To perform reliable recognition, it is important that the features extracted from the training
image be detectable even under changes in image scale, noise and illumination. Such
points usually lie in high-contrast regions of the image, such as object edges.
The SIFT technique is one of the most robust and widely used image matching
algorithms based on local features; it ensures a good mosaic image and a reliable result. SIFT is a feature detection and description technique that produces keypoint
descriptors describing the image features [4]. The SIFT technique has four computational steps for extracting keypoints: scale-space peak selection, keypoint localization, orientation assignment, and the definition of keypoint descriptors. For each image, it
builds an image pyramid by generating progressively blurred images and subtracts
neighbouring images to obtain the difference-of-Gaussian (DoG) pyramid. It then detects the
extrema of the DoG pyramid. The number of keypoints is reduced to increase both the efficiency and the robustness of the technique: keypoints are rejected if they
have low contrast or are located on an edge. The next step, orientation
assignment, builds an orientation histogram of the gradient directions
sampled in a neighbourhood centred on each keypoint. The last step describes the
keypoints. Lowe proposed a method for extracting distinctive invariant features from
images that can be used to perform reliable matching between different views of an
object or scene [5]. Adel et al., in their work, carried out a survey on the techniques
used for image stitching based on feature extraction [6], in which the SIFT algorithm was
regarded as the most widely used algorithm for image matching. Suzuki et al. developed a SIFT-based monocular SLAM algorithm for a small unmanned aerial vehicle
[7].

Chapter 3
Robots
3.1

Qbot

The Quanser QBot 2 for QUARC is an innovative open-architecture autonomous
ground robot built on a two-wheel mobile platform. Equipped with built-in sensors,
a vision system, and accompanied by extensive courseware, the QBot 2 is ideally
suited for teaching undergraduate and advanced robotics and mechatronics courses.
The open-architecture control structure allows users to add other off-the-shelf sensors
and customize the QBot 2 for various research areas.
On-board computer: Gumstix DuoVero Zephyr with integrated 802.11 b/g/n WiFi
Platform: Kobuki mobile base by Yujin Robot
Maximum linear speed: 0.7 m/s
Sensors:
3 digital bumper sensors
3 digital wheel drop sensors
3 analog and digital cliff sensors
3-axis gyroscope
2 wheel encoder inputs
1 Z-axis angle measurement (heading)
1 battery voltage sensor
1 Kinect RGBD sensor
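The wheel encoders listed above are typically used for dead-reckoning odometry on a two-wheel differential-drive base. A minimal sketch follows; the wheel-base value and the conversion from encoder ticks to travelled distance are assumptions for illustration, not specifications from the text:

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update from per-wheel travelled distances
    (already converted from encoder ticks) for a differential-drive base."""
    d_center = (d_left + d_right) / 2.0          # forward displacement
    d_theta = (d_right - d_left) / wheel_base    # heading change
    # integrate along the arc using the midpoint heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Equal wheel displacements move the robot straight ahead with no heading change; opposite displacements rotate it in place. This odometry estimate is exactly the motion input that the SLAM filters of Chapter 2 correct with sensor observations.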

3.2

FireBird XII

Firebird XII is a four-wheel differential drive robot equipped with an onboard Intel Core i3 computer and various sensors.
Sensors:
Laser range finder
9-DOF IMU consisting of a 3-axis digital gyroscope, 3-axis accelerometer and 3-axis magnetometer
GPS receiver module
Battery voltage, current and temperature monitoring
On-board camera with pan-tilt movement
Locomotion:
High-performance position and velocity control
4-wheel differential drive configuration
Quadrature position encoders
Velocity: 100 cm/second
Communication:
2.4 GHz wireless module on the robot with a USB wireless module for external PC communication

Bibliography
[1] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, pp. 34-46, Feb. 2007.
[2] L. Zhao, S. Huang, and G. Dissanayake, "Linear SLAM: A linear solution to the feature-based and pose graph SLAM based on submap joining," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 24-30, Nov. 2013.
[3] J. E. Guivant and E. M. Nebot, "Optimization of the simultaneous localization and map-building algorithm for real-time implementation," IEEE Transactions on Robotics and Automation, vol. 17, pp. 242-257, Jun. 2001.
[4] F. Alhwarin, C. Wang, D. Ristic-Durrant, and A. Graser, "Improved SIFT-features matching for object recognition," in Proceedings of the 2008 International Conference on Visions of Computer Science: BCS International Academic Conference (VoCS'08), Swinton, UK, pp. 179-190, British Computer Society, 2008.
[5] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[6] E. Adel, M. Elmogy, and H. Elbakry, "Image stitching based on feature extraction techniques: A survey," International Journal of Computer Applications, vol. 99, pp. 1-8, August 2014.
[7] T. Suzuki, Y. Amano, and T. Hashizume, "Development of a SIFT based monocular EKF-SLAM algorithm for a small unmanned aerial vehicle," in SICE Annual Conference (SICE), 2011 Proceedings of, pp. 1656-1659, Sept. 2011.
