
Received 2 January 2023, accepted 5 February 2023, date of publication 9 February 2023, date of current version 14 February 2023.

Digital Object Identifier 10.1109/ACCESS.2023.3243633

Research on Multi-Sensor Fusion SLAM Algorithm Based on Improved Gmapping

CHENGJUN TIAN, HAOBO LIU, ZHE LIU, HONGYANG LI, AND YUYU WANG
School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Corresponding author: Chengjun Tian ([email protected])

ABSTRACT Simultaneous Localization and Mapping (SLAM) is the core technology of the intelligent robot
system, and it is also the basis for its autonomous movement. In recent years, it has been found that SLAM
using a single sensor has certain limitations, such as Inertial Measurement Unit (IMU) noise and serious
drift, and 2D radar can only detect environmental information on the same horizontal plane. In this regard,
this paper constructs a multi-sensor back-end fusion SLAM algorithm that combines vision, laser, encoder
and IMU information. Experiments have proved that compared with using a single sensor, the application
of a multi-sensor fusion system makes the edges of the constructed map clearer and the noise reduced.
Aiming at the problem of increased calculation caused by particle degradation and too many particles, this
paper improves the Gmapping algorithm and uses a combination of selective resampling and Kullback-Leibler
Distance (KLD) sampling to complete resampling. It has been proved by experiments that, compared
with the original Gmapping algorithm, the improved algorithm increases the particle
convergence speed by 39.85% in the process of indoor mapping. The traditional
loop detection algorithm is easily affected by environmental factors, resulting in low detection accuracy,
while loop detection algorithms based on deep convolutional neural networks involve a large amount of
calculation and take a long time to detect. Aiming at these problems, the main research of this paper is to apply a deep learning-based loop
detection algorithm on the multi-sensor fusion framework, and use the combination of high-dimensional and
low-dimensional features of the image for loop detection. This paper uses different algorithms to conduct
comparative experiments on the dataset CityCentre. The experimental results show that compared with the
traditional Bag of Words (BoW) algorithm and the deep learning-based AlexNet, VGG19, and ResNet32 algorithms,
the accuracy of the algorithm proposed in this paper has increased by 31.26%, 14.21%, 3.05%, and 1.56%,
respectively. In addition, the comparison experiment results of SLAM mapping with the original Real-Time
Appearance-Based Mapping (RTAB-MAP) algorithm prove that the loop closure detection algorithm based
on deep learning proposed in this paper can enable the system to better build a globally consistent map,
including more environmental information.

INDEX TERMS Laser SLAM, Gmapping algorithm, multi-sensor fusion SLAM system, loop closure
detection.

I. INTRODUCTION

In recent years, intelligent robots have played an important role in contactless services such as material distribution, regional disinfection, and public area security patrols. Simultaneous Localization and Mapping (SLAM), as the basic technology of intelligent robots, has significantly benefited from the development of artificial intelligence. There are three main types of SLAM [1] systems: laser-based SLAM systems, multi-sensor-based SLAM systems, and vision-based SLAM systems.

The classic SLAM system includes five modules: sensor data reading, front-end visual odometry, loop closure detection, nonlinear back-end optimization, and building maps. Among them, loop closure detection is an essential link in the visual SLAM system, and it is also an effective method
to eliminate accumulated errors. Traditional loop closure detection algorithms usually use artificially designed features, and their problems mainly concentrate on unsatisfactory accuracy and a large amount of calculation. In recent years, research on loop closure detection algorithms based on deep learning has gradually emerged and has shown excellent performance. Therefore, this paper introduces a deep learning-based loop closure detection algorithm into the multi-sensor fusion SLAM system, which could improve the system's overall performance. We used this loop closure detection method to replace the traditional loop closure detection algorithm provided by Real-Time Appearance-Based Mapping (RTAB-MAP) [2] and conducted SLAM mapping experiments with each.

Although the lidar used in laser SLAM has the advantages of high measurement accuracy, fast speed, and a simple measurement error model, it has obvious shortcomings: 2D radar [3] can only detect environmental information on a single horizontal plane, and the cost of 3D lidar is too high. Other sensors have limitations as well: although the price of an IMU is low and its frequency is high, it has high noise and severe drift; the wheel encoder has a significant error when the wheel slips or runs on uneven ground; and when the camera cannot obtain depth information, there is a significant deviation in the estimation. A single sensor can only meet the needs of a specific scene and cannot adequately deal with complex environments. Therefore, multi-sensor SLAM technology [4] has become a hot spot in the field of SLAM research. This technology inputs the information obtained by various sensors into the SLAM system and uses the fused information for the pose estimation, trajectory estimation and mapping of the mobile robot [5], so that the mobile robot gains a multi-dimensional environment information acquisition capability and can realize a robust SLAM system in complex environments.

The classic laser SLAM [6], [7] algorithms include the Gmapping algorithm, the Cartographer algorithm, and the HectorSLAM algorithm. Among them, the particle filter-based Gmapping algorithm has the following advantages:
• ability to build indoor maps in real time;
• when building a small scene map, the amount of computation required is small and the accuracy is high;
• effective use of odometer information, etc.

However, the Gmapping algorithm suffers from particle degradation and from the use of excessive particles, which increases the amount of calculation. Its disadvantages are as follows:
• it needs information from the odometer;
• it is not suitable for drones and rough terrain;
• it has no loop closure detection module;
• when building a large scene, there are too many particles and a massive amount of calculation.

Therefore, this paper improves the Gmapping algorithm and changes the original sampling method to one that alternately performs selective resampling and Kullback-Leibler Distance (KLD) sampling. In summary, this paper studies three multi-sensor fusion SLAM frameworks (laser and encoder; laser, encoder, and Inertial Measurement Unit (IMU) [8]; and laser, encoder, IMU and vision) and conducts related experiments.

This paper contributes to practical applications. On the one hand, the Gmapping [9] algorithm can build indoor maps in real time and requires little calculation while achieving high precision when building small scene maps. However, the number of particles required increases as the scene grows: because each particle carries a map, the amount of memory and calculation required to construct a large map increases, and the convergence speed becomes slower and slower. Through the improved method in this paper, the particle convergence speed is increased, and the shortcomings of the Gmapping algorithm in large scenes are effectively overcome. For indoor planar motion robots (such as sweeping robots and food delivery robots) the effect is even better, and the faster particle convergence can improve the cleaning speed of a sweeper or the delivery rate of a food delivery robot, bringing a better user experience. On the other hand, single sensors have certain limitations in application. Single-sensor SLAM systems based on lidar often fail in unstructured scenarios such as long corridors or flat open spaces, while the performance of vision-based single-sensor systems is sensitive to initialization and to light intensity and its changes. In this paper, the fusion of lidar, camera and IMU overcomes such problems and effectively improves the robustness, accuracy and reliability of the system. More and more multi-sensor fusion frameworks are being applied to the fields of robotics and autonomous driving.

In summary, the main contributions of this paper are as follows:
• The particle filter-based Gmapping algorithm has two main problems: particle degradation, and an excessive number of particles leading to an increase in the amount of calculation. To address these two problems, this paper improves the resampling stage of the Gmapping algorithm: selective resampling and KLD sampling are carried out alternately. The experiments prove that the particle convergence speed of the improved Gmapping algorithm in this paper is increased by 39.85%, a clear improvement.
• Aiming at the limitations of a single sensor or of fusing only a small number of sensors, this paper studies three types of multi-sensor fusion SLAM frameworks. The experiments fully demonstrate that the overall performance of the SLAM system fusing vision, laser, encoder and IMU [10] is superior.
• As an important part of the visual SLAM system, loop detection is especially important for a multi-sensor fusion SLAM system. In this paper, in place of the original traditional loop detection algorithm, a loop detection algorithm based on deep learning is introduced into the
multi-sensor fusion SLAM system. Experiments have proved that the improved system can better construct a globally consistent map.

The rest of this article is arranged as follows: the second part introduces the related work, including the classic algorithms of laser SLAM and the current state of multi-sensor fusion; the third part introduces the improved Gmapping method, the multi-sensor fusion theory, and the loop closure detection method applied in this paper; the fourth part presents the relevant experiments and the analysis of the experimental results; and the fifth part is the conclusion.

II. RELATED WORK

A SLAM system that uses only lidar or a fusion of other sensors with lidar is called a laser SLAM system. Classic laser SLAM algorithms include the Gmapping algorithm, the Cartographer algorithm and HectorSLAM. A visual SLAM system uses a camera as the main sensor. The visual SLAM scheme mainly studied in this paper is the RTAB-MAP algorithm.

A. CLASSIC ALGORITHM OF LASER SLAM

1) GMAPPING ALGORITHM

As we all know, the basic problem that SLAM needs to solve is to complete the robot's positioning and build a map of the surrounding environment simultaneously. This problem is described with a joint probability distribution model from probability theory:

$$P\left(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}\right) \tag{1}$$

where $x_{1:t}$ represents the pose sequence of the robot at times 1:t; $m$ represents the map of the robot's surrounding environment; $z_{1:t}$ represents the sensor measurement data of the robot at times 1:t; and $u_{1:t-1}$ represents the control data of the robot at times 1:t-1.

That is, given the current robot sensor measurements $z_{1:t}$ and robot control data $u_{1:t-1}$, the robot pose state $x_{1:t}$ and the robot's surrounding environment map $m$ can be expressed by the above formula. From the relevant knowledge of probability theory, the above formula can be factored as follows:

$$P\left(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}\right) = P\left(m \mid x_{1:t}, z_{1:t}\right) \cdot P\left(x_{1:t} \mid z_{1:t}, u_{1:t-1}\right) \tag{2}$$

Using Rao-Blackwellized Particle Filtering (RBPF) [11], we decompose the above problem into localization and mapping. The core idea of the particle filter is to characterize the probability distribution with a particle set. However, the RBPF algorithm has two defects. The first is that the algorithm uses many particles, which causes a large amount of calculation and memory consumption. The second is that resampling frequently can cause particle degradation.

Aiming at these two defects of the RBPF algorithm, the RBPF-based Gmapping algorithm makes two significant improvements. On the one hand, an improved proposal distribution is adopted to reduce the number of particles used during the run. On the other hand, selective resampling is used to reduce the resampling frequency.

• Improved proposal distribution
Sampling from a distribution yields the pose estimate of the robot at the next moment; this distribution is the proposal distribution. The proposal distribution is just a surrogate for the target distribution, and the difference between the two is reflected in the weight values of the particles. In order to solve the problem of too many particles and particle degradation, the Gmapping algorithm proposes to use the latest lidar observation data to improve the proposal distribution, which then becomes:

$$P\left(x_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, z_t, u_{t-1}\right) = \frac{P\left(z_t \mid m_{t-1}^{(i)}, x_t\right) P\left(x_t \mid x_{t-1}^{(i)}, u_{t-1}\right)}{P\left(z_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1}\right)} \tag{3}$$

• Selective resampling
Resampling also determines the performance of particle filtering. The Gmapping algorithm uses an iterative calculation to make the particles approach the target distribution. The resampling stage determines whether particles with high weight values can effectively replace particles with low weight values. In general, the number of effective particles is used as the standard to measure the degradation degree of the particle weights, and its calculation formula is as follows:

$$N_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N}\left(\tilde{\omega}^{(i)}\right)^{2}} \tag{4}$$

Among them, $\tilde{\omega}^{(i)}$ represents the normalized weight of particle $i$, that is, the ratio of the target distribution to the proposal distribution; resampling is performed only when $N_{\mathrm{eff}}$ falls below a set threshold. By adopting this constraint, the number of resampling operations can be effectively reduced, and the particle degradation process can be slowed down.
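To make the selective resampling criterion concrete, the following minimal Python sketch (an illustrative reconstruction, not the Gmapping source code; the N/2 trigger threshold and the systematic resampling scheme are common choices assumed here) computes N_eff from Eq. (4) and resamples only when it drops below the threshold:

```python
import numpy as np

def n_eff(weights):
    """Effective sample size of Eq. (4): 1 / sum of squared normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalized weights (omega-tilde)
    return 1.0 / np.sum(w ** 2)

def selective_resample(particles, weights, threshold_ratio=0.5):
    """Systematic resampling, performed only when N_eff < threshold_ratio * N."""
    n = len(particles)
    if n_eff(weights) >= threshold_ratio * n:
        return particles, weights            # weights still healthy: skip resampling
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    resampled = [particles[i] for i in idx]  # real Gmapping also copies each particle's map
    return resampled, np.full(n, 1.0 / n)    # weights reset to uniform after resampling
```

Skipping the resampling step while N_eff stays high preserves particle diversity, which is exactly how selective resampling slows the degradation process described above.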
2) CARTOGRAPHER ALGORITHM

Cartographer [12] is a classic laser SLAM algorithm based on graph optimization, developed by Google. The difference between this algorithm and the particle filter-based Gmapping algorithm is that the Cartographer algorithm estimates not only the mobile robot's current pose state $x_t$ but also the trajectory of the entire environment map construction process $x_{0:t}$. The algorithm framework consists of two parts: Local SLAM and Global SLAM.

The content of Local SLAM includes: ① using the data of the odometer and IMU to compute the trajectory and give an estimate of the robot's pose; ② using the estimated pose as the initial value, matching the lidar data, and updating the pose estimate; ③ superimposing each frame of lidar data after motion
filtering, which then forms a submap (Submap). The content of Global SLAM includes: ① loop detection; ② back-end optimization, using all sub-graphs to form a complete and usable map.

FIGURE 1. Cartographer algorithm framework.

3) HECTORSLAM ALGORITHM

The idea of the HectorSLAM algorithm [13] is very straightforward: "align" the laser points with the existing map, that is, scan matching. Scan matching refers to constructing an error function between the current frame and the acquired map data and using the Gauss-Newton method to obtain the optimal solution and deviation. The main work of the algorithm is to convert the laser points to the raster map. That is to say, for the matching to be successful, all laser points need to be transformed into the raster map.
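To illustrate the Gauss-Newton alignment step, the sketch below is a simplified reconstruction rather than the HectorSLAM implementation: it assumes the map is a smooth 2D occupancy array `grid` with cell size `res`, queries it by bilinear interpolation (the helper `map_value_grad` is introduced here for illustration), and minimizes the summed squared error 1 − M(S_i(ξ)) over the pose ξ = (x, y, θ) for scan points S_i:

```python
import numpy as np

def map_value_grad(grid, p, res):
    """Bilinearly interpolated occupancy M(p) and its gradient dM/dp.
    grid: 2D occupancy array in [0, 1], indexed [row=y, col=x]; p: (x, y) in meters."""
    gx, gy = p[0] / res, p[1] / res
    i, j = int(np.floor(gx)), int(np.floor(gy))
    u, v = gx - i, gy - j
    m00, m10 = grid[j, i], grid[j, i + 1]
    m01, m11 = grid[j + 1, i], grid[j + 1, i + 1]
    val = (1 - v) * ((1 - u) * m00 + u * m10) + v * ((1 - u) * m01 + u * m11)
    dx = ((1 - v) * (m10 - m00) + v * (m11 - m01)) / res
    dy = ((1 - u) * (m01 - m00) + u * (m11 - m10)) / res
    return val, np.array([dx, dy])

def gauss_newton_step(grid, res, scan, pose):
    """One Gauss-Newton update of pose = (x, y, theta) that pulls the scan
    points toward occupied map cells (residual r_i = 1 - M(S_i(pose)))."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    H = np.zeros((3, 3))
    b = np.zeros(3)
    for px, py in scan:                      # scan points in the robot frame
        wx, wy = x + c * px - s * py, y + s * px + c * py
        m, g = map_value_grad(grid, (wx, wy), res)
        J = np.array([[1, 0, -s * px - c * py],   # d(world point)/d(pose)
                      [0, 1,  c * px - s * py]])
        Ji = g @ J                           # dM/d(pose), shape (3,)
        H += np.outer(Ji, Ji)
        b += Ji * (1.0 - m)
    delta = np.linalg.solve(H + 1e-6 * np.eye(3), b)  # small damping for stability
    return pose + delta
```

Iterating this step until the update is small yields the aligned pose; HectorSLAM additionally runs the alignment over a multi-resolution map pyramid to reduce the risk of local minima.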
B. MULTI-SENSOR SLAM

Multi-sensor SLAM usually refers to inputting the information obtained by various sensors into the SLAM system and using the results obtained by the information fusion algorithm as the pose estimation, trajectory estimation, and mapping of the mobile robot.

Shen Shaojie et al. [14] of the Hong Kong University of Science and Technology proposed an algorithm based on the fusion of monocular vision and IMU, the Visual-Inertial System Monocular (VINS-Mono). This method can calibrate the external parameters between the camera sensor and the IMU online, uses the IMU information as the prior for the visual odometry, and uses a nonlinear sliding-window optimization method to solve for the pose information. The front-end odometer uses the Kanade-Lucas-Tomasi (KLT) optical flow method to track visual features; it is one of the classic mapping and positioning algorithms based on visual-inertial navigation. Wan et al. [15] used the Error State Kalman Filter (ESKF) to fuse the IMU information from the strapdown inertial navigation solution, the lidar odometer information and the Global Navigation Satellite System (GNSS) information based on a prior map. Xue et al. [16] used the fused data of the IMU and the wheel speedometer as the prior value of motion estimation, and then used the Extended Kalman Filter (EKF) to fuse the data of the laser odometer and the IMU. Wisth et al. [17] proposed an efficient odometry model that fuses vision, lidar, and IMU sensors. This method proposes a new way of extracting laser features: by extracting line features and surface features from laser point clouds, it solves the problem that traditional point cloud matching between frames can only obtain sub-optimal pose information, and it uses factor graph optimization to jointly optimize the residual information of lidar, vision and IMU, so as to obtain the pose trajectory information and create a laser point cloud map. Meng et al. [18] proposed a tight coupling of monocular vision methods and lidar odometry to extract 3D features from lidar and vision information. In this system, the monocular camera and the 3D LIDAR measurements are tightly combined for joint optimization, which can provide accurate data for Six-Degrees-of-Freedom (6-DOF) pose estimation preprocessing; the Iterative Closest Point (ICP) method is used to construct closed-loop constraints and perform global pose optimization to obtain high-frequency and high-precision pose estimation.

C. DEEP LEARNING-BASED LOOP CLOSURE DETECTION METHOD

Loop closure detection is an essential link in the visual SLAM system and is an effective method to eliminate accumulated errors. Traditional loop closure detection algorithms usually use artificially designed features with poor accuracy and a large amount of computation. Research on loop closure detection algorithms based on deep learning has gradually emerged and performed well in recent years.

Li et al. [19] proposed a monocular visual odometry system, UnDeepVO, able to estimate the 6-DoF pose of a monocular camera and its depth of view by using a deep neural network. The system utilizes the spatial and temporal losses between stereoscopic image sequences for unsupervised training. During testing, the system can perform pose estimation and dense depth map estimation on monocular images, and what differentiates it from other model-based or learning-based monocular visual odometry (VO) methods is that it recovers scale during the training phase. Chen et al. [20] proposed a monocular visual odometry framework based on a convolutional long short-term memory network (LSTM) and a convolutional neural network (CNN), called LSTM visual odometry (LSTMVO). It uses an unsupervised end-to-end deep learning framework to simultaneously estimate the 6-DoF pose and the depth of view of a monocular camera. It effectively addresses the problems that supervised visual odometry methods place high demands on training data and that current unsupervised learning methods fail to effectively integrate context information into pose estimation. Future optimization could consider extending it to a full visual SLAM system and reducing the global pose drift by introducing a wider range of closed-loop constraints. Yu Yu et al. [21] of Southeast University, aiming at the problem that the detection accuracy of traditional loopback detection with artificially designed features drops sharply in complex environments, combined convolutional neural networks with locality-sensitive hashing to propose a deep learning-based loop closure detection method. Zhou et al. [22] of Shandong University proposed a new method based on the deep learning-based Local 3D Deep Descriptor (L3D) to solve
problems related to loop closure detection in SLAM. L3D is an emerging compact representation of patches extracted from point clouds, learned from data using deep learning algorithms. The authors propose a new overlap metric for loop closure detection, a novel approach capable of accurately detecting loops and estimating six-degrees-of-freedom poses in the presence of small overlaps. A future optimization direction could be to fuse 2D visual cues extracted from images with the 3D local descriptors to deal with scenes lacking geometric structure, and to focus on optimizing the descriptor calculations. Daniele Cattaneo et al. [23] propose LCDNet, a novel loop closure detection network architecture for loop closure detection and point cloud registration, which efficiently detects loop closures in LiDAR point clouds. LCDNet consists of a shared feature extractor based on a Point-Voxel RCNN (PV-RCNN) network, a place recognition head that captures discriminative global descriptors, and a novel differentiable relative pose head based on unbalanced optimal transport theory, which can effectively align two point clouds without any prior information about their initial misalignment. Experiments show that this architecture has good generalization ability.

D. RTAB-MAP

The visual SLAM scheme mainly studied in this paper is based on the RTAB-MAP algorithm [24], which is essentially a time- and scale-independent graph-based optimization algorithm focused on solving the online loop detection problem when running in a large environment. At present, RTAB-MAP supports the use of various sensors to complete 2D or 3D SLAM tasks. It is a relatively complete and effective open-source SLAM library.

Its core idea is that when the number of positioning points in the map makes the time to find a positioning match exceed a certain set threshold, the RTAB-MAP algorithm transfers the positioning points in the Working Memory (WM) that are less likely to form a closed loop to the Long-Term Memory (LTM); the transferred positioning points no longer participate in the next closed-loop detection calculation. The Short-Term Memory (STM) module within the WM module is used to observe the temporal similarity of consecutive images and to update the positioning point weights accordingly. There are two key operations in the loop closure detection part: the first is to take anchor points out of the LTM module and put them back into the WM module, and the second is to transfer the anchor points in the WM module that meet the transfer criteria to the LTM module. RTAB-MAP uses an EKF-based multi-sensor loosely coupled data fusion method. The overall flowchart of RTAB-MAP is shown in FIG.2.

FIGURE 2. Overall flow chart of RTAB-MAP.
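As a purely illustrative toy (the class names, the weight-based victim selection and the fixed WM capacity are assumptions of this sketch, not the actual RTAB-MAP implementation), the two key transfer operations described above can be pictured as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Location:
    loc_id: int
    weight: float = 0.0   # raised by the STM when consecutive images look similar

@dataclass
class MemoryManager:
    wm: dict = field(default_factory=dict)   # Working Memory: loop-closure candidates
    ltm: dict = field(default_factory=dict)  # Long-Term Memory: parked locations
    wm_capacity: int = 100                   # proxy for the matching-time threshold

    def add(self, loc: Location):
        self.wm[loc.loc_id] = loc
        self._enforce_capacity()

    def retrieve(self, loc_id: int):
        """Key operation 1: bring a neighbor of a likely loop back from LTM to WM."""
        if loc_id in self.ltm:
            self.wm[loc_id] = self.ltm.pop(loc_id)

    def _enforce_capacity(self):
        """Key operation 2: when WM grows too large, transfer the lowest-weight
        (least likely to close a loop) locations to LTM."""
        while len(self.wm) > self.wm_capacity:
            victim = min(self.wm.values(), key=lambda l: l.weight)
            self.ltm[victim.loc_id] = self.wm.pop(victim.loc_id)
```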
III. METHOD

This section elaborates on the improved Gmapping algorithm, the SLAM frameworks for the three sensor fusions, and the deep learning-based loop closure detection algorithm applied in this paper.

A. IMPROVED GMAPPING ALGORITHM

This paper improves the Gmapping algorithm as follows.

Particle filtering is an iterative calculation. Each particle carries a map, so corresponding computing power is required to support it. When building a map in a small indoor environment, the quality of the map can be guaranteed by optimizing the resampling stage, and the particles' convergence speed can be accelerated at the same time. Particle filtering uses particles to approximate the posterior distribution; that is, the more particles there are in an area, the more likely the actual state of the robot is to fall in that area. In order to better reflect the actual situation of the robot, a large number of particles is required. So, how many particles are needed in an actual scene to be considered to reflect reality? The following formula can be used to calculate the difference between the estimated probability distribution $p(\cdot)$ and the actual probability distribution $q(\cdot)$, namely the KL-distance:

$$K(p, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)} \tag{5}$$

Usually, the target distribution is unavailable, but the $\chi^2$ distribution can be used to approximate the distance between the estimated distribution and the true target distribution. Aiming at the two problems of particle degradation and the increase in the calculation amount caused by too many particles, this paper proposes to complete the resampling stage by alternately performing selective resampling and KLD sampling. The improved Gmapping algorithm is described in Table 1 below.

TABLE 1. Improved Gmapping algorithm.
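The χ²-based approximation referred to above yields a closed-form particle count. The sketch below follows the standard KLD-sampling bound (a minimal illustration; the bin size, ε and δ values are assumptions for demonstration, not parameters reported in this paper): with probability 1 − δ, n particles keep the KL-distance between the particle approximation and the true posterior below ε, where k is the number of occupied histogram bins:

```python
import numpy as np
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Particles needed so that, with prob. 1 - delta, the KL-distance between
    the particle estimate and the true posterior is at most epsilon.
    k: number of pose-histogram bins containing at least one particle."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)            # upper (1 - delta) quantile of N(0, 1)
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z) ** 3
    return int(np.ceil(n))

def occupied_bins(poses, bin_size=(0.5, 0.5, np.deg2rad(15))):
    """Count occupied bins of a coarse (x, y, theta) histogram over the particles."""
    bins = {tuple(np.floor(p / np.asarray(bin_size)).astype(int)) for p in poses}
    return len(bins)
```

As the robot localizes, the particles concentrate, k shrinks, and the required particle count drops; this is what lets the combined selective-resampling/KLD scheme save memory and computation in large scenes.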
B. MULTI-SENSOR FUSION FRAMEWORK

The multi-sensor fusion frameworks studied in this paper are as follows.

1) LASER AND ENCODER FUSION SLAM FRAMEWORK

The SLAM framework based on the fusion of laser and encoder is the traditional one; FIG.3 shows the specific framework. The principles of the Gmapping algorithm, the Cartographer algorithm and the HectorSLAM algorithm were briefly reviewed in the previous section [25]. Among them, only the Cartographer algorithm includes loop closure detection. However, it only performs a graph optimization of all submaps at intervals of a certain number of scans.
It uses the estimated errors as constraints to optimize the estimates, which makes it difficult to fully eliminate the accumulated estimation error. Neither the Gmapping algorithm nor HectorSLAM has a loop closure detection module [26]. Therefore, the loop closure detection module and the subsequent constraint calculation in FIG.3 are drawn with dashed lines.

FIGURE 3. SLAM framework based on the fusion of laser and encoder.

The framework mainly uses the data provided by the lidar scan frames and the mileage data provided by the wheel encoder for positioning. It then estimates the optimal pose of the mobile robot after fusion with the extended Kalman filter. This information is then used to complete the subsequent mapping and optimization work, in which the drawn map is a raster map [27].

2) LASER, ENCODER AND IMU FUSION SLAM FRAMEWORK

The SLAM framework based on laser, encoder and IMU adds the IMU to the system of the previous section to provide acceleration and angular velocity information [13]; FIG.4 shows the specific framework diagram.

Among the three algorithms studied above, the Cartographer algorithm and HectorSLAM do not require odometry information to complete the SLAM function. In contrast, the particle filter-based Gmapping algorithm relies heavily on the information provided by the odometer. In the subsequent experiments, we use all the sensor information to complete the mapping experiments.

The framework uses the displacement, acceleration and angular velocity information provided by the wheel encoder and the IMU. It fuses these data in a loosely coupled manner through the extended Kalman filter algorithm and then derives the robot's pose information by dead reckoning. Then, the pose estimated from the data provided by the lidar scanning frames is also fused by the extended Kalman filter to estimate the current optimal pose of the robot. Finally, this information is used to complete the subsequent mapping and optimization work, and the drawn map is a raster map.

FIGURE 4. SLAM framework based on the fusion of laser, encoder and IMU.
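To make the loosely coupled fusion concrete, here is a minimal 2D EKF sketch (an illustration, not the implementation used in this paper; the state layout, the motion model and the noise matrices Q and R are assumptions) that predicts the pose [x, y, θ] from encoder/IMU odometry and corrects it with the pose estimated from lidar scan matching:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PoseEKF:
    """Minimal 2D extended Kalman filter over the state [x, y, theta]."""
    def __init__(self):
        self.x = np.zeros(3)                 # pose estimate
        self.P = np.eye(3) * 1e-3            # state covariance

    def predict(self, v, omega, dt, Q=np.diag([1e-3, 1e-3, 1e-4])):
        """Propagate with encoder speed v and IMU yaw rate omega (dead reckoning)."""
        th = self.x[2]
        self.x += np.array([v * dt * np.cos(th), v * dt * np.sin(th), omega * dt])
        self.x[2] = wrap(self.x[2])
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0,  1]])            # Jacobian of the motion model
        self.P = F @ self.P @ F.T + Q

    def update(self, z, R=np.diag([5e-3, 5e-3, 1e-3])):
        """Correct with a pose z = [x, y, theta] from lidar scan matching."""
        H = np.eye(3)                        # the lidar pose observes the state directly
        y = z - self.x
        y[2] = wrap(y[2])
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P
```

Because the coupling is loose, each sensor's estimate is produced independently and only the resulting poses are fused, which matches the framework described above.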

3) FOUR TYPES OF SENSOR FUSION SLAM FRAMEWORK

The SLAM framework based on the fusion of vision, laser, encoder and IMU adds an RGB-D camera [28] on top of the framework of the previous section; FIG.5 shows the specific framework.

The framework is a back-end fusion algorithm, which essentially perceives the multi-dimensional comprehensive data after fusion. The back-end fusion is loose and is therefore also called a loosely coupled algorithm [29]. All sensors are independent before their results are obtained, and there is no sensor-to-sensor constraint. The data from the RGB-D camera and the 2D lidar are processed by the corresponding perception algorithms to obtain results and are then aggregated and fused with the estimation results of the encoder and IMU [30]. The results obtained after this aggregation and fusion are transmitted to the loop closure detection and graph optimization module [31]. Finally, the system framework outputs an OctoMap, a point cloud, a 2D occupancy raster map, map data, a Map Graph and TF [32].

FIGURE 5. SLAM framework based on the fusion of vision, laser, encoder and IMU.

C. A DEEP LEARNING-BASED LOOP CLOSURE DETECTION TECHNOLOGY ADOPTED

A deep learning-based loop closure detection algorithm is adopted in this paper, which can improve the accuracy and
efficiency of loop closure detection, eliminate the accumulated error in the process of robot motion, and enhance the robustness of SLAM in complex environments. The algorithm includes unified image specification, keyframe selection, and the construction of a high-dimensional feature vector library and a low-dimensional feature vector library [33].

The schematic diagram of the loop detection algorithm proposed in this paper is shown in FIG.6, and the specific algorithm steps are described in Table 2. Among them, high-dimensional image features are extracted by MobileNetV3-Large, while low-dimensional image features are extracted by AlexNet.

The specific steps of the loop closure detection algorithm are as follows:
• Unify the image specifications of the motion-process images obtained by the RGB-D camera to obtain the continuous images p0 → pn.
• The low-dimensional and high-dimensional feature vectors of p0 → pn are extracted simultaneously in parallel: the low-dimensional feature vectors are extracted by AlexNet and the high-dimensional feature vectors by MobileNetV3-Large, and Data(L) and Data(H) are established respectively.
• Use Data(L) to determine, through the proposed keyframe selection algorithm, the keyframes and their frame numbers used in loop closure detection, and record the high-dimensional feature vectors corresponding to the keyframe frame numbers.
• Compare the high-dimensional feature vector of the current frame pc with those of the keyframes. If the similarity with some keyframe is greater than α, find the previous keyframe and the next keyframe of this keyframe to determine the range of frames in which the loop closure possibly exists.
• According to the range of frame numbers that may contain a loop closure, compare the high-dimensional feature vector of pc with the corresponding feature vectors of that frame-number range in Data(H). If there is a frame whose comparison result is greater than γ, it is determined that a loop closure is formed.

Description: Data(L) is the low-dimensional feature vector database; Data(H) is the high-dimensional feature vector database of the keyframes; α is a preset value for the preliminary judgment of a possible loop closure; γ is a preset reliability parameter threshold.

Unified image specification process: ① unify the image specifications after the continuous video information is processed; ② input 227×227 and 224×224 images to AlexNet and MobileNetV3-Large, respectively.

Keyframe selection algorithm process:

TABLE 2. Keyframe selection algorithm.

This deep learning-based loop closure detection algorithm is introduced into the multi-sensor SLAM system to replace the traditional loop closure detection algorithm provided by RTAB-MAP. The resulting framework is shown in FIG.7. Due to the limitation of the computing power of the mobile robot itself, it is necessary to offload part of the computing load to the deep learning server mentioned above, using the network topology to disperse the computing pressure.
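The two-stage comparison can be illustrated with off-the-shelf backbones. The sketch below is an illustrative reconstruction, not the authors' code: the pooling choices are assumptions, and the threshold value γ = 0.95 is taken from the experiments in Section IV. It extracts both feature vectors and scores candidate pairs by cosine similarity:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
mobilenet = models.mobilenet_v3_large(
    weights=models.MobileNet_V3_Large_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def low_dim_feature(img227):
    """AlexNet convolutional features for a 1x3x227x227 batch, pooled to a vector."""
    f = alexnet.features(img227)
    return F.adaptive_avg_pool2d(f, 1).flatten(1)    # shape (1, 256)

@torch.no_grad()
def high_dim_feature(img224):
    """MobileNetV3-Large features for a 1x3x224x224 batch, pooled to a vector."""
    f = mobilenet.features(img224)
    return F.adaptive_avg_pool2d(f, 1).flatten(1)    # shape (1, 960)

def is_loop_closure(feat_current, feat_candidate, gamma=0.95):
    """Declare a loop closure when the cosine similarity exceeds gamma."""
    return F.cosine_similarity(feat_current, feat_candidate).item() > gamma
```

In practice the databases Data(L) and Data(H) would hold these vectors for all frames and keyframes respectively, and the comparisons can run on the deep learning server to relieve the robot's onboard computer.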
IV. EXPERIMENT AND ANALYSIS

This section presents the experiments and analysis. First of all, the hardware and software platforms required for the experiments are introduced. First, the improved Gmapping algorithm proposed in this paper is compared with the original algorithm, and the experimental results are analyzed.
Second, a comparative SLAM experiment of the three multi-sensor fusion frameworks studied above is carried out on the improved Gmapping algorithm, and the experimental results are analyzed. Finally, a comparison experiment of SLAM mapping between the original RTAB-MAP algorithm and the deep learning-based RTAB-MAP loop closure detection algorithm is carried out, and the experimental results are analyzed in detail.

FIGURE 6. Schematic diagram of the deep learning loop detection algorithm proposed in this paper.

FIGURE 7. Multi-sensor SLAM framework with loop closure detection algorithm.

A. EXPERIMENTAL PLATFORM AND EXPERIMENTAL ENVIRONMENT

1) EXPERIMENTAL PLATFORM

FIG.8 shows the Mecanum-wheel mobile robot used in the experiments. It is equipped with an NVIDIA Jetson Nano motherboard and uses an STM32F405 control board as the motion controller. In terms of sensors, it carries a Silan RPLIDAR A1 lidar with a measurement radius of 12 m and a scanning frequency of 16 Hz; an IMU accelerometer-gyroscope sensor, used to measure the three-axis attitude angle (or angular rate) and the acceleration of the object; a 520 encoder geared motor, which accumulates mileage from the moment the robot starts to move; and a LeTV LeTMC-520 RGB-D depth camera with 1080P RGB resolution, a depth resolution of 640 × 480, and a video frame rate of 30 FPS.

FIGURE 8. Mecanum kinematics principle of the mobile robot.

All software of the Mecanum robot runs in the Ubuntu system; the version used in this paper is Ubuntu 18.04. The robot's operating system is ROS Melodic, which runs in the Ubuntu 18.04 environment. Moreover, many third-party libraries are used in the experiments, such as Pangolin, Eigen, nanoflann, PCL, OpenCV, OctoMap, g2o, and Sophus. After the connection between the PC (Personal Computer) side and the robot side is established through the LAN, the software rviz can be used on the PC to visually inspect the robot's SLAM process, and the software Gazebo can be used to simulate the robot's running experiments.

The deep learning experiments run on a laboratory server. The hardware configuration includes two Intel Xeon E5-series CPUs, four RTX 2080 Ti graphics cards, one 1 TB SSD, and two 32 GB DDR4 memory modules. The software environment is a 64-bit Ubuntu 18.04 operating system, which is configured
with CUDA 10.0 and cuDNN 7.5 provided by NVIDIA. The PyTorch deep learning framework is used as the basic framework of the experiments, and the development tool is Python 3.7.

2) EXPERIMENTAL ENVIRONMENT

In order to better compare the SLAM effects of the different multi-sensor fusion frameworks, the experimental environment is a closed room, which reduces the impact of drastic environmental changes on the experiment. FIG.9 shows the central part of the experimental environment.

FIGURE 9. The central part of the experimental environment.

B. IMPROVED GMAPPING ALGORITHM EXPERIMENT

When using the original algorithm and the improved algorithm in this paper to conduct the comparative experiment, the displacement distance is kept the same, which ensures that the experimental results are directly comparable. When using the original Gmapping algorithm, the state of the particles during the robot's forward motion is shown in FIG.10(a)(b). When using the improved Gmapping algorithm, the state of the particles during the robot's forward motion is shown in FIG.11(a)(b).

The time needed by the original Gmapping algorithm and by the improved Gmapping algorithm in this paper to make the particles reach the same convergence state is shown in Table 3.

TABLE 3. Time for particles to converge to the same state for the two algorithms.

By observing FIG.10(a)(b), it can be concluded that when the original Gmapping algorithm is used, the state of the particles has roughly converged after the robot moves about two map units. Observing FIG.11(a)(b), with the improved Gmapping algorithm in this paper the particles converge after the robot moves the same distance as in the original algorithm's experiment. According to Table 3, it can then be concluded that the particle convergence speed of the improved Gmapping algorithm in this paper is increased by 39.85%. To sum up, the experiment proves that the particle convergence speed of the improved Gmapping algorithm in this paper is faster than that of the original algorithm, and a better improvement result is obtained.

Yan et al. [34] made two improvements based on the Gmapping algorithm and conducted related experiments. The first improvement combines the Gmapping algorithm with AF (the firefly algorithm), increasing its speed by 7.32%; the second combines the Gmapping algorithm with AF and AS (adaptive sampling), increasing the speed by about 14%. The improved method proposed in this paper increases the speed by 39.85%, which fully proves its effectiveness.

C. MULTI-SENSOR FUSION SLAM EXPERIMENT

The primary purpose of the experiments in this subsection is to compare the three multi-sensor SLAM frameworks studied in the previous section. In this part of the experiment, the improved Gmapping algorithm in this paper is used.
• The first part is the SLAM experiment of laser and encoder fusion. The experimental results are shown in FIG.12(a)(b)(c).
• The second part is the SLAM experiment of the fusion of laser, encoder and IMU. The experimental results are shown in FIG.13(a)(b)(c).
• The third part is the SLAM experiment of the fusion of vision, laser, encoder and IMU. First, FIG.14(a)(b)(c) shows the colour image, the depth image and the depth-point cloud data map captured by the camera. These camera images can provide much information for the system to complete the SLAM function. The experimental results are shown in FIG.15(a)(b)(c), which shows three perspectives of the mapping results.

First, comparing the experimental results in FIG.12 and FIG.13, it can be concluded from observation that when the IMU data is not fused, the edge contours of the 2D grid maps constructed by the three algorithms are blurred and noisy. The 2D raster map constructed after the system integrates the IMU data has more apparent edges and reduced noise. Then, comparing the experimental results of FIG.13 and FIG.14, after the system integrates the data provided by the RGB-D camera, it can reconstruct the surrounding scene in 3D on the basis of building a 2D grid map. The map constructed in this way can contain more environmental information.

It can be seen from the horizontal comparison of the experimental results that the edge contour of the map constructed
by the Cartographer algorithm based on graph optimization is relatively blurred, and the map constructed by the HectorSLAM algorithm contains more noise at the edges. The Gmapping algorithm, with more explicit map boundaries and less noise, is better than the HectorSLAM algorithm and the Cartographer algorithm in indoor mapping performance.

FIGURE 10. Particle state of the original algorithm. (a) The particle state at the initial position of the robot in the original algorithm; (b) The particle state after the robot moves forward in the original algorithm.

FIGURE 11. Particle state of the improved algorithm. (a) The particle state at the initial position of the robot in the improved algorithm; (b) The particle state after the robot moves forward in the improved algorithm.

FIGURE 12. Experimental results of laser and encoder fusion SLAM. (a) Gmapping experimental result; (b) Cartographer experimental result; (c) HectorSLAM experimental result.

In addition, from the experimental results it can be concluded that the information obtained by the 2D lidar, encoder and IMU can only build a 2D grid map and cannot build a 3D environment map. Moreover, because it is challenging for the laser-based multi-sensor SLAM system to complete loop closure detection, the global consistency of the constructed map is poor. According to the experimental analysis,
it can be concluded that the overall performance of the SLAM system integrating vision, laser, encoder and IMU is better.

FIGURE 13. Experimental results of laser, encoder and IMU fusion SLAM. (a) Gmapping experimental result; (b) Cartographer experimental result; (c) HectorSLAM experimental result.

FIGURE 14. Images captured by the camera. (a) Color image; (b) Depth image; (c) Depth point cloud data graph; (d) Depth-point cloud data graph at runtime.

D. COMPARATIVE EXPERIMENT OF LOOP CLOSURE DETECTION ALGORITHM BASED ON DEEP LEARNING

The loop closure detection experiment in this paper uses the CityCentre dataset from Oxford University, a total of 2474 outdoor scene images of size 640 × 480 in .jpg format. The indicators for evaluating the loop closure detection algorithm in this paper mainly include the Precision-Recall (P-R) curve and the Average Precision (AP).

In this section, AlexNet is selected as the network for extracting low-dimensional features, and MobileNetV3-Large is used as the network for extracting high-dimensional features. Both networks are pre-trained on ImageNet. Cosine similarity is used to judge the similarity of the output features in order to perform loop closure detection. The experimental results show that when α = 0.8, β = 0.6 and γ = 0.95, the accuracy of loop closure detection is the highest, and the detection efficiency is improved with the reduction of the calculation amount.
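For reference, the P-R curve and AP for a set of scored candidate pairs can be computed as follows (a generic evaluation sketch, not the authors' code; it assumes one similarity score and one ground-truth label per candidate pair):

```python
import numpy as np

def pr_curve(scores, labels):
    """Precision and recall over descending score thresholds.
    scores: similarity of each candidate pair; labels: 1 = true loop closure."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                # true positives at each cutoff
    fp = np.cumsum(1 - labels)            # false positives at each cutoff
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    return precision, recall

def average_precision(precision, recall):
    """AP = sum over cutoffs of (R_n - R_{n-1}) * P_n."""
    r_prev = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - r_prev) * precision))
```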


FIGURE 15. Experimental results of vision, laser, encoder and IMU fusion SLAM. (a) Due south; (b) Diagonal direction; (c) Due east.

FIGURE 16. Line diagram of the loop closure detection experiment.

FIGURE 17. Experimental results using the CityCentre dataset.

FIGURE 18. RTAB-MAP mapping result. (a) Due south; (b) Diagonal direction; (c) Due east.

In the comparison experiment, the traditional loop closure detection algorithm is the SIFT-based Bag of Words (BoW) algorithm, and the deep learning-based loop closure detection algorithms include AlexNet, VGG19, ResNet32 and the deep learning-based loop closure detection algorithm proposed in this paper. By observing the P-R curves shown in FIG.17, it can be seen that when the recall rate is lower than 0.4, the precision of the algorithm proposed in this paper is 1; in comparison, the loop closure detection algorithm proposed in this paper has the best effect. From the average accuracy rates shown in Table 4, the accuracy rate of the algorithm proposed in this paper is 31.26% higher than that of the traditional BoW algorithm; compared with the deep learning-based AlexNet, VGG19 and ResNet32 algorithms, the accuracy rates increased by 14.21%, 3.05%, and 1.56%, respectively. Overall, the performance of the loop closure detection algorithm proposed in this paper is the best.
E. TWO TYPES OF LOOP CLOSURE DETECTION ALGORITHM'S COMPARISON EXPERIMENTS

The original RTAB-MAP algorithm is run on the experimental platform for the SLAM mapping experiment, and the results are shown in FIG.18.

TABLE 4. Experimental algorithm loop closure detection accuracy.

On the experimental platform, the loop closure detection algorithm introduced above is used to replace the traditional loop closure detection algorithm provided by RTAB-MAP to conduct the SLAM mapping experiment. The results are shown in FIG.19.

FIGURE 19. RTAB-MAP mapping results after replacing the loop closure detection algorithm. (a) Due south; (b) Diagonal direction; (c) Due east.

RTAB-MAP can build a 3D map of the surrounding environment. Comparing the experimental results at the red circle mark in FIG.18 and the green circle mark in FIG.19, after the loop closure detection algorithm proposed in this paper replaces the loop closure detection algorithm in RTAB-MAP, it can be clearly observed that the constructed environment maps are more regular. FIG.19 does not produce the distortion seen at the red circle in FIG.18, which means that a globally consistent map can be better constructed. Moreover, the environment map constructed by the latter can reproduce more environmental information.

V. CONCLUSION

In this paper, we propose a multi-sensor fusion SLAM algorithm framework based on improved Gmapping. Through theoretical research and experimental verification, it is proved that the framework, applied to mobile robots, enables them to work indoors with high robustness and precision. We see that the SLAM system integrating vision, laser, encoder and IMU is generally more stable and accurate. Secondly, the improved Gmapping algorithm is significantly better than the Cartographer algorithm and the HectorSLAM algorithm in terms of indoor mapping performance, and its particle convergence speed is 39.85% higher than that of the original Gmapping algorithm, which significantly reduces the memory and calculation consumed by the algorithm. Finally, the loop closure detection algorithm proposed in this paper is superior to the traditional BoW algorithm and to the AlexNet, VGG19 and ResNet32 algorithms in terms of accuracy, and it is superior to the original RTAB-MAP algorithm in terms of the construction effect of the environment map.

To address the shortcomings of this paper in future research, first, we will test our system in more complex scenarios and improve the test analysis to compare more different algorithms to achieve better results. Second, a multi-geometry-based dynamic object detection method will be added to cooperate with image processing algorithms to further improve the adaptability and robustness of the system in dynamic environments. Third, on this basis, we will explore multi-sensor fusion SLAM and a multi-frame fusion Gmapping algorithm based on the tightly coupled method of extended Kalman filtering to further improve the performance of SLAM.

REFERENCES
[1] M. Yang, "Overview on issues and solutions of SLAM for mobile robot," Comput. Syst. Appl., vol. 27, no. 7, pp. 1–10, Jul. 2018.
[2] M. Labbé and F. Michaud, "RTAB-Map as an open-source LiDAR and visual simultaneous localization and mapping library for large-scale and long-term online operation," J. Field Robot., vol. 36, no. 2, pp. 416–446, 2019.
[3] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-time loop closure in 2D LiDAR SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2016, pp. 1271–1278.
[4] J. Zhang and S. Singh, "Visual-LiDAR odometry and mapping: Low-drift, robust, and fast," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2015, pp. 2174–2181.
[5] X. Wang, "Mobile robot for SLAM research based on LiDAR and binocular vision fusion," Chin. J. Sensors Actuators, vol. 31, no. 3, pp. 394–399, Mar. 2018.
[6] K. Konolige, G. Grisetti, R. Kümmerle, W. Burgard, B. Limketkai, and R. Vincent, "Efficient sparse pose adjustment for 2D mapping," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2010, pp. 22–29.
[7] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: A versatile and accurate monocular SLAM system," IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, Oct. 2015.
[8] S. Kumar and R. M. Hegde, "Multi-sensor data fusion methods for indoor localization under collinear ambiguity," Pervasive Mobile Comput., vol. 30, pp. 18–31, Aug. 2016.
[9] X. Ding, Y. Wang, D. Li, L. Tang, H. Yin, and R. Xiong, "Laser map aided visual inertial localization in changing environment," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Oct. 2018, pp. 4794–4801.
[10] Y. Sun, M. Liu, and M. Q.-H. Meng, "Active perception for foreground segmentation: An RGB-D data-based background modeling method," IEEE Trans. Autom. Sci. Eng., vol. 16, no. 4, pp. 1596–1609, Oct. 2019.
[11] L. Zhang, L. Wei, P. Shen, W. Wei, G. Zhu, and J. Song, "Semantic SLAM based on object detection and improved OctoMap," IEEE Access, vol. 6, pp. 75545–75559, 2018.
[12] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Trans. Robot., vol. 33, no. 5, pp. 1255–1262, Oct. 2017.
[13] B. Bescos, J. M. Fácil, J. Civera, and J. Neira, "DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes," IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 4076–4083, Oct. 2018.
[14] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Trans. Robot., vol. 34, no. 4, pp. 1004–1020, Aug. 2018.
[15] G. Wan, X. Yang, R. Cai, H. Li, Y. Zhou, H. Wang, and S. Song, "Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2018, pp. 4670–4677.
[16] H. Xue, H. Fu, and B. Dai, "IMU-aided high-frequency LiDAR odometry for autonomous driving," Appl. Sci., vol. 9, no. 7, p. 1506, Apr. 2019.
[17] D. Wisth, "Unified multi-modal landmark tracking for tightly coupled LiDAR-visual-inertial odometry," IEEE Robot. Autom. Lett., vol. 6, no. 2, pp. 1255–1262, Apr. 2021.
[18] L. Meng, C. Ye, and W. Lin, "A tightly coupled monocular visual LiDAR odometry with loop closure," Intell. Service Robot., vol. 15, no. 1, pp. 129–141, Mar. 2022.
[19] R. Li, S. Wang, Z. Long, and D. Gu, "UnDeepVO: Monocular visual odometry through unsupervised deep learning," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Brisbane, QLD, Australia, May 2018, pp. 7286–7291.
[20] C. Zonghai, "Monocular visual odometer based on recurrent convolutional neural network," Robotics, vol. 41, no. 2, pp. 147–155, 2019.
[21] Y. Yu, "A loop closure detection method for visual SLAM based on deep learning," Comput. Eng. Des., vol. 41, no. 2, pp. 529–536, 2020.
[22] Y. Zhou, Y. Wang, F. Poiesi, Q. Qin, and Y. Wan, "Loop closure detection using local 3D deep descriptors," IEEE Robot. Autom. Lett., vol. 7, no. 3, pp. 6335–6342, Jul. 2022.
[23] D. Cattaneo, M. Vaghi, and A. Valada, "LCDNet: Deep loop closure detection and point cloud registration for LiDAR SLAM," IEEE Trans. Robot., vol. 38, no. 4, pp. 2074–2093, Aug. 2022.
[24] S. Das, "Simultaneous localization and mapping (SLAM) using RTAB-MAP," 2018, arXiv:1809.02989.
[25] J. Civera and S. H. Lee, "RGB-D odometry and SLAM," in RGB-D Image Analysis and Processing. Cham, Switzerland: Springer, Oct. 2019, pp. 117–144.
[26] M. Hsiao, E. Westman, G. Zhang, and M. Kaess, "Keyframe-based dense planar SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Singapore, May 2017, pp. 5110–5117.
[27] F. Steinbrucker, C. Kerl, D. Cremers, and J. Sturm, "Large-scale multi-resolution surface reconstruction from RGB-D sequences," in Proc. IEEE Int. Conf. Comput. Vis., Sydney, NSW, Australia, Dec. 2013, pp. 3264–3271.
[28] Y. Sun, M. Liu, and M. Q.-H. Meng, "Improving RGB-D SLAM in dynamic environments: A motion removal approach," Robot. Auton. Syst., vol. 89, pp. 110–122, Mar. 2017.
[29] M. Klingensmith, S. S. Srinivasa, and M. Kaess, "Articulated robot motion for simultaneous localization and mapping (ARM-SLAM)," IEEE Robot. Autom. Lett., vol. 1, no. 2, pp. 1156–1163, Jul. 2016.
[30] R. Scona, M. Jaimez, Y. R. Petillot, M. Fallon, and D. Cremers, "StaticFusion: Background reconstruction for dense RGB-D SLAM in dynamic environments," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Brisbane, QLD, Australia, May 2018, pp. 1–9.
[31] J. Yang, D. Guo, K. Li, Z. Wu, and Y.-K. Lai, "Global 3D non-rigid registration of deformable objects using a single RGB-D camera," IEEE Trans. Image Process., vol. 28, no. 10, pp. 4746–4761, Oct. 2019.
[32] J.-H. Kim, C. Cadena, and I. Reid, "Direct semi-dense SLAM for rolling shutter cameras," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2016, pp. 1308–1315.
[33] H. Li, C. Tian, L. Wang, and H. Lv, "A loop closure detection method based on semantic segmentation and convolutional neural network," in Proc. Int. Conf. Artif. Intell. Electromech. Autom. (AIEA), May 2021, pp. 269–272.
[34] Y. Han, W. Wei, C. Jinhua, D. Didi, and W. Rujia, "Research on SLAM Gmapping based on AF and AS algorithm optimization," J. Jiangsu Inst. Technol., vol. 28, no. 2, pp. 93–101, 2022.

CHENGJUN TIAN received the Doctor of Engineering degree from the Changchun University of Science and Technology, in 2011. He is currently an Associate Professor with the Changchun University of Science and Technology. His current research interests include pattern recognition and intelligent systems. He is also a member of the Education and Training Committee of the China Simulation Society, the Director of the Jilin Province Automation Society, and the Director of the Jilin Province Robotics Society.

HAOBO LIU received the bachelor's degree from the Jilin Institute of Chemical Technology, Jilin, China, in 2021. He is currently pursuing the master's degree with the Changchun University of Science and Technology, Jilin. His current research interests include SLAM and computer vision.

ZHE LIU received the bachelor's degree from the Jiangxi University of Science and Technology, Jiangxi, China, in 2021. She is currently pursuing the master's degree with the Changchun University of Science and Technology, Jilin, China. Her current research interests include SLAM and computer vision.

HONGYANG LI received the bachelor's degree from the Tianjin University of Science and Technology, Tianjin, China, in 2016, and the master's degree from the Changchun University of Science and Technology, Jilin, China, in 2021. He is currently engaged in SLAM- and NLP-related work.

YUYU WANG received the bachelor's degree in engineering from the School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China, in 2020. He is currently pursuing the master's degree in control science and engineering. His current research interests include computer vision and motion planning of robotic arms.
