Research On Multi-Sensor Fusion SLAM Algorithm Based On Improved Gmapping
ABSTRACT Simultaneous Localization and Mapping (SLAM) is the core technology of the intelligent robot
system, and it is also the basis for its autonomous movement. In recent years, it has been found that SLAM
using a single sensor has certain limitations, such as Inertial Measurement Unit (IMU) noise and serious
drift, and 2D radar can only detect environmental information on the same horizontal plane. In this regard,
this paper constructs a multi-sensor back-end fusion SLAM algorithm that combines vision, laser, encoder
and IMU information. Experiments have proved that compared with using a single sensor, the application
of a multi-sensor fusion system makes the edges of the constructed map clearer and the noise reduced.
Aiming at the increased calculation caused by particle degradation and too many particles, this paper
improves the Gmapping algorithm, using a combination of selective resampling and Kullback-Leibler
Distance (KLD) sampling to complete resampling. Experiments prove that, compared with the original
Gmapping algorithm, the improved algorithm increases the particle convergence speed by 39.85% in the
process of indoor mapping. The traditional loop closure detection algorithm is easily affected by
environmental factors, resulting in low detection accuracy, while loop closure detection algorithms based
on deep convolutional neural networks involve a large amount of calculation and take a long time to detect.
The main research of this paper is therefore to apply a deep learning-based loop closure detection
algorithm on the multi-sensor fusion framework, using a combination of high-dimensional and
low-dimensional image features for loop closure detection. This paper uses different algorithms to conduct
comparative experiments on the dataset CityCentre. The experimental results show that compared with the
traditional algorithms Bag of Words (BoW), AlexNet algorithm, VGG19 algorithm, and ResNet32 algorithm,
the accuracy of the algorithm proposed in this paper has increased by 31.26%, 14.21%, 3.05%, and 1.56%,
respectively. In addition, the comparison experiment results of SLAM mapping with the original Real-Time
Appearance-Based Mapping (RTAB-MAP) algorithm prove that the loop closure detection algorithm based
on deep learning proposed in this paper can enable the system to better build a globally consistent map,
including more environmental information.
INDEX TERMS Laser SLAM, Gmapping algorithm, multi-sensor fusion SLAM system, loop closure
detection.
to eliminate accumulated errors. Traditional loop closure detection algorithms usually use hand-designed features, and their problems mainly concentrate on unsatisfactory accuracy and a large amount of calculation. In recent years, research on loop closure detection algorithms based on deep learning has gradually emerged and has shown excellent performance. This paper therefore introduces a deep learning-based loop closure detection algorithm into the multi-sensor fusion SLAM system, which could improve the system's overall performance. We used this loop closure detection method to replace the traditional loop closure detection algorithm provided by Real-Time Appearance-Based Mapping (RTAB-MAP) [2] and conducted SLAM mapping experiments with each.

Although the lidar used in laser SLAM has the advantages of high measurement accuracy, fast speed, and a simple measurement error model, it has obvious shortcomings: 2D radar [3] can only detect environmental information on the same plane, and the cost of 3D lidar is too high. As for other sensors, although the price of the IMU is low and its output frequency is high, it has high noise and severe drift; the wheel encoder has a significant error when the wheel slips or runs on uneven ground; and when the camera cannot obtain depth information, there is a significant deviation in the estimation. A single sensor can only meet the needs of a specific scene and cannot adequately deal with complex environments. Therefore, multi-sensor SLAM technology [4] has become a hot spot in the field of SLAM research. This technology feeds the information obtained by the various sensors into the SLAM system and uses the fusion results as the pose estimation, trajectory estimation and mapping of the mobile robot [5], so that the mobile robot gains a multi-dimensional capability to acquire environment information and can implement a robust SLAM system in complex environments.

The classic laser SLAM [6], [7] algorithms include the Gmapping algorithm, the Cartographer algorithm, and the HectorSLAM algorithm. Among them, the Gmapping algorithm based on the particle filter has the following advantages:
• Ability to build indoor maps in real time;
• When building a small scene map, the amount of computation required is small, and the accuracy is high;
• Effective use of odometer information, etc.
However, the Gmapping algorithm suffers from particle degradation, the use of excessive particles, and an increase in the amount of calculation. Its disadvantages are as follows:
• Needs information from an odometer;
• Not suitable for drones and rough terrain;
• No loop closure detection module;
• When building a large scene, there are too many particles and a massive amount of calculation.
Therefore, this paper improves the Gmapping algorithm and changes the original sampling method to one that alternately performs selective resampling and Kullback-Leibler Distance (KLD) sampling. In summary, this paper studies three multi-sensor fusion SLAM frameworks (laser and encoder; laser, encoder, and Inertial Measurement Unit (IMU) [8]; and laser, encoder, IMU and vision) and conducts related experiments.

This paper contributes to practical applications. On the one hand, the Gmapping [9] algorithm can build indoor maps in real time and, for small scene maps, requires little calculation while achieving high precision. However, the number of particles required increases as the scene grows; because each particle carries its own map, the memory and calculation required to construct a large map increase, and the convergence speed becomes slower and slower. The improved method in this paper increases the particle convergence speed and effectively overcomes these shortcomings of the Gmapping algorithm in large scenes. For indoor planar motion robots (such as sweeping robots and food delivery robots), the effect is even better: faster particle convergence can improve the cleaning speed of a sweeping robot or the delivery rate of a food delivery robot, bringing users a better experience. On the other hand, single sensors have certain limitations in application. Single-sensor SLAM systems based on lidar often fail in unstructured scenarios such as long corridors or flat open spaces, and the performance of vision-based single-sensor systems is sensitive to initialization and to changes in light intensity. In this paper, the fusion of lidar, camera and IMU can overcome such problems and effectively improve the robustness, accuracy and reliability of the system. More and more multi-sensor fusion frameworks are being applied in robotics and autonomous driving.

In summary, the main contributions of this paper are as follows:
• The particle filter-based Gmapping algorithm has two main problems: particle degradation, and an excessive number of particles leading to an increase in the amount of calculation. To address them, this paper improves the Gmapping algorithm so that selective resampling and KLD sampling are alternately carried out in the resampling stage (a sketch follows this list). The experiments prove that the particle convergence speed of the improved Gmapping algorithm is increased by 39.85%, a clear improvement.
• Aiming at the limitations of a single sensor or of fusing only a small number of sensors, this paper studies three types of multi-sensor fusion SLAM frameworks. The experiments fully demonstrate that the overall performance of the SLAM system fusing vision, laser, encoder and IMU [10] is superior.
• Loop closure detection, an important part of a visual SLAM system, is especially important for a multi-sensor fusion SLAM system. In this paper, a loop closure detection algorithm based on deep learning is introduced into the multi-sensor fusion SLAM system in place of the original traditional loop closure detection algorithm. Experiments have proved that the improved system can better construct a globally consistent map.
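As a minimal sketch of the resampling scheme named in the first contribution, the following Python fragment combines the effective-sample-size test used for selective resampling with Fox's KLD-sampling bound for choosing an adaptive particle count. The function names, the N/2 threshold, the plain multinomial resampling step, and the way the occupied-bin count is supplied are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def n_eff(weights):
    """Effective sample size from particle weights, as in Eq. (4)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Fox's KLD-sampling bound: the number of particles needed so that,
    with probability 1-delta, the KL divergence between the sampled and
    the true posterior over k occupied histogram bins stays below epsilon."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)          # upper (1 - delta) standard normal quantile
    a = 2.0 / (9.0 * (k - 1))
    return int(np.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z) ** 3))

def resample_step(particles, weights, occupied_bins):
    """One resampling decision: skip resampling while N_eff is high
    (selective resampling); otherwise draw a KLD-adaptive number of
    particles by multinomial resampling and reset the weights."""
    n = len(particles)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    if n_eff(w) >= n / 2.0:            # N/2 threshold: a common choice
        return particles, w            # particles are still healthy
    m = kld_sample_size(occupied_bins) # adaptive particle count
    idx = np.random.choice(n, size=m, p=w)
    return [particles[i] for i in idx], np.full(m, 1.0 / m)
```

In this sketch, occupied_bins stands for the number of pose-space histogram bins the current particle set occupies; the full KLD-sampling procedure counts bins on-line while drawing samples.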
The rest of this article is arranged as follows: the second part mainly introduces the related work, including the classic algorithms of laser SLAM and the current situation of multi-sensor fusion; the third part mainly introduces the improved Gmapping method, the multi-sensor fusion theory, and the loop closure detection method applied in this paper; the fourth part mainly introduces the relevant experiments and the analysis of the experimental results; and the fifth part is the conclusion.

II. RELATED WORK
A SLAM system that uses only lidar, or a fusion of other sensors with lidar, is called a laser SLAM system. Classic laser SLAM algorithms include the Gmapping algorithm, the Cartographer algorithm and HectorSLAM. A visual SLAM system uses a camera as the main sensor. The visual SLAM scheme mainly studied in this paper is the RTAB-MAP algorithm.
A. CLASSIC ALGORITHM OF LASER SLAM
1) GMAPPING ALGORITHM
As we all know, the basic problem that SLAM needs to solve is to complete the robot's positioning and build a map of the surrounding environment simultaneously. This problem can be described with a joint probability distribution model from probability theory:

$$P(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) \quad (1)$$

where x1:t represents the pose sequence of the robot at times 1:t; m represents the map of the robot's surrounding environment; z1:t represents the sensor measurement data of the robot at times 1:t; and u1:t−1 represents the control data of the robot at times 1:t−1.
That is, given the current sensor measurements z1:t and the robot control data u1:t−1, the robot pose sequence x1:t and the surrounding environment map m are jointly estimated through the above distribution. From probability theory, the formula can be factorized as follows:

$$P(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) = P(m \mid x_{1:t}, z_{1:t}) \cdot P(x_{1:t} \mid z_{1:t}, u_{1:t-1}) \quad (2)$$

Using Rao-Blackwellized Particle Filtering (RBPF) [11], the problem is decomposed into localization and mapping. The core idea of the particle filter is to characterize the probability distribution with a set of particles. However, the RBPF algorithm has two defects. The first is that the algorithm uses many particles, which causes a large amount of calculation and memory consumption. The second is that frequent resampling causes particle degradation.
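To make the factorization in Eq. (2) concrete, here is a minimal RBPF skeleton in Python in which each particle carries its own trajectory and map, mirroring the localization/mapping split; motion_model, obs_likelihood, and map_update are placeholder callables rather than the paper's implementation.

```python
import numpy as np

class Particle:
    """One RBPF hypothesis: a full trajectory estimate plus its own map."""
    def __init__(self, pose, n_particles):
        self.trajectory = [np.asarray(pose)]  # x_{1:t}: sequence of [x, y, yaw]
        self.grid = {}                        # per-particle map m (placeholder)
        self.weight = 1.0 / n_particles

def rbpf_step(particles, u, z, motion_model, obs_likelihood, map_update):
    """One filter step: sample a new pose per particle (localization),
    reweight by the observation likelihood, then update the particle's
    own map with the new pose (mapping), following Eq. (2)."""
    for p in particles:
        pose = motion_model(p.trajectory[-1], u)     # draw from P(x_t | x_{t-1}, u_{t-1})
        p.trajectory.append(pose)
        p.weight *= obs_likelihood(z, pose, p.grid)  # score with P(z_t | m, x_t)
        map_update(p.grid, pose, z)                  # refine P(m | x_{1:t}, z_{1:t})
    total = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= total                            # normalize the weights
    return particles
```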
Aiming at the two defects of the RBPF algorithm, the RBPF-based Gmapping algorithm makes two significant improvements. On the one hand, an improved proposal distribution is adopted to reduce the number of particles used during the run. On the other hand, selective resampling is used to reduce the resampling frequency.
• Improved proposal distribution
Sampling from a distribution yields the pose estimate of the robot at the next moment; this distribution is the proposal distribution. The proposal distribution is only a surrogate for the target distribution, and the difference between the two is reflected in the particle weights. In order to solve the problems of too many particles and particle degradation, the Gmapping algorithm proposes to use the latest lidar observation data to improve the proposal distribution, which then becomes:

$$P\left(x_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, z_t, u_{t-1}\right) = \frac{P\left(z_t \mid m_{t-1}^{(i)}, x_t\right) P\left(x_t \mid x_{t-1}^{(i)}, u_{t-1}\right)}{P\left(z_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1}\right)} \quad (3)$$
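This excerpt does not spell out how Eq. (3) is sampled. A common realization, following the standard Gmapping implementation, refines the odometry prediction by scan matching, evaluates the observation likelihood at a few poses around the optimum, and draws the new pose from a fitted Gaussian. A sketch under those assumptions (match and obs_likelihood are placeholders):

```python
import numpy as np

def improved_proposal(pose_pred, scan, grid, match, obs_likelihood,
                      k=10, radius=0.05):
    """Gaussian approximation of the improved proposal in Eq. (3)."""
    x_star = match(scan, grid, pose_pred)           # scan-matched pose optimum
    # Evaluate the likelihood at K poses sampled around the optimum
    samples = x_star + np.random.uniform(-radius, radius, size=(k, 3))
    w = np.array([obs_likelihood(scan, s, grid) for s in samples])
    w = w / w.sum()
    mu = (w[:, None] * samples).sum(axis=0)         # weighted mean
    diff = samples - mu
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0)
    # Draw the particle's new pose from the fitted Gaussian
    return np.random.multivariate_normal(mu, cov + 1e-9 * np.eye(3))
```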
• Selective resampling
Resampling also determines the performance of the particle filter. The Gmapping algorithm uses iterative calculation to make the particles approach the target distribution. The resampling stage determines whether particles with high weights can effectively replace particles with low weights. In general, the number of effective particles is used as the standard to measure the degree of particle weight degradation, and its calculation formula is as follows:

$$N_{eff} = \frac{1}{\sum_{i=1}^{N} \left(\tilde{\omega}^{(i)}\right)^2} \quad (4)$$

where ω̃^(i) represents the normalized weight of particle i, that is, the ratio of the target distribution to the proposal distribution. For example, N particles with uniform weights give N_eff = N, while a single dominant particle gives N_eff ≈ 1; resampling is performed only when N_eff drops below a threshold (commonly N/2). By adopting this constraint, the number of resampling operations can be effectively reduced, and the particle degradation process can be slowed down.

2) CARTOGRAPHER ALGORITHM
Cartographer [12] is a classical laser SLAM algorithm based on graph optimization developed by Google. The difference between this algorithm and the particle filter-based Gmapping algorithm is that the Cartographer algorithm estimates not only the mobile robot's current pose state x_t but also the trajectory of the entire map construction process x_0:t. The algorithm framework consists of two parts: Local SLAM and Global SLAM.
The content of Local SLAM includes: ① use the data of the odometer and IMU to calculate the trajectory and give an estimated value of the robot's pose; ② use the estimated pose as the initial value, match the lidar data, and update the value of the pose estimate; ③ each frame of lidar data is superimposed with the motion
information to complete the SLAM function.

TABLE 1. Improved gmapping algorithm.

In contrast, the
particle filter-based Gmapping algorithm relies heavily on
the information provided by the odometer. In the subsequent
experiments, we use all the sensor information to complete
the mapping experiment.
The framework uses the displacement, acceleration and angular velocity information provided by the wheel encoder and the IMU. It fuses these data in a loosely coupled manner through the extended Kalman filter algorithm and derives the robot's pose information by dead reckoning. Then, the pose estimated from the lidar scanning frames is also fused by the extended Kalman filter to estimate the current optimal pose of the robot. Finally, this information is used to complete the subsequent mapping and optimization work, and the drawn map is a grid map.
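As a minimal illustration of this loosely coupled scheme, the sketch below keeps a 2D pose state, predicts it with encoder/IMU dead reckoning, and corrects it with a pose produced by lidar scan matching. The state layout, motion model, and noise matrices are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class LooselyCoupledEKF:
    """2D pose EKF: predict with encoder/IMU, correct with a lidar pose."""

    def __init__(self):
        self.x = np.zeros(3)                   # state: [x, y, yaw]
        self.P = np.eye(3) * 1e-3              # state covariance
        self.Q = np.diag([1e-3, 1e-3, 1e-4])   # process noise (assumed)
        self.R = np.diag([1e-2, 1e-2, 1e-3])   # lidar pose noise (assumed)

    def predict(self, v, omega, dt):
        """v: encoder linear velocity; omega: IMU angular velocity."""
        x, y, yaw = self.x
        self.x = np.array([x + v * dt * np.cos(yaw),
                           y + v * dt * np.sin(yaw),
                           yaw + omega * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * dt * np.sin(yaw)],
                      [0.0, 1.0,  v * dt * np.cos(yaw)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z_scan_pose):
        """z_scan_pose: [x, y, yaw] estimated by lidar scan matching."""
        H = np.eye(3)                          # the pose is observed directly
        r = np.asarray(z_scan_pose) - self.x   # innovation
        r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi  # wrap the yaw residual
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ r
        self.P = (np.eye(3) - K @ H) @ self.P
```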
FIGURE 6. Schematic diagram of the deep learning loop detection algorithm proposed in this paper.
FIGURE 7. Multi-sensor SLAM framework with loop closure detection algorithm.

Second, a comparative SLAM experiment with the three multi-sensor fusion frameworks studied above is carried out on the improved Gmapping algorithm, and the experimental results are analyzed. Finally, a comparison experiment of SLAM mapping between the original RTAB-MAP algorithm and the deep learning-based RTAB-MAP loop closure detection algorithm is carried out, and the experimental results are analyzed in detail.
A. EXPERIMENTAL PLATFORM AND EXPERIMENTAL ENVIRONMENT
1) EXPERIMENTAL PLATFORM
FIG.8 below shows the mobile robot based on the Mecanum kinematics principle, equipped with an NVIDIA Jetson Nano. All software of the Mecanum robot runs in the Ubuntu system; the version used in this paper is Ubuntu18.04. The robot's operating system is ROS Melodic, which runs in the Ubuntu18.04 environment. Moreover, many third-party libraries are used in the experiments, such as Pangolin, Eigen, nanoflann, PCL, OpenCV, Octomap, G2O, and Sophus. After the connection between the PC (Personal Computer) side and the robot side is established through the LAN, the software rviz can be used on the PC to inspect the robot's SLAM process visually, and the software Gazebo can be used to simulate the robot's running experiments.

The experimental platform for deep learning is a laboratory server. The hardware configuration includes two Intel Xeon processor E5 series CPUs, four RTX 2080 Ti graphics cards, one 1TB SSD solid state drive, and two DDR4 32G memories. The software environment is a 64-bit Ubuntu18.04 operating system, configured with CUDA10.0 and cuDNN7.5 provided by NVIDIA. The PyTorch deep learning framework is used as the basic framework of the experiment, and the development tool is Python3.7.

2) EXPERIMENTAL ENVIRONMENT
In order to better compare the SLAM effects of different multi-sensor fusion frameworks, the experimental environment is a closed room, which can reduce the impact of drastic environmental changes on the experiment. FIG.9 shows the central part of the experimental environment.

With the improved Gmapping algorithm in this paper, the particles converge after the robot moves the same distance as in the original algorithm's experiment. According to Table 3, it can be concluded that the particle convergence speed of the improved Gmapping algorithm in this paper is increased by 39.85%. To sum up, the experiment proves that the particle convergence speed of the improved Gmapping algorithm is faster than that of the original algorithm, and a better improvement result is obtained.

Yan et al. [34] made two improvements based on the Gmapping algorithm and conducted related experiments. The first improvement combines the Gmapping algorithm with AF (Firefly Algorithm), increasing its speed by 7.32%; the second combines the Gmapping algorithm with both AF and AS (Adaptive Sampling), increasing the speed by about 14%. The improved method proposed in this paper increases the speed by 39.85%, which fully proves its effectiveness.
FIGURE 10. Particle state of the original algorithm. (a) The particle state at the initial position of the robot in the original algorithm; (b) The particle state after the robot moves forward in the original algorithm.
FIGURE 11. Particle state of the improved algorithm. (a) The particle state at the initial position of the robot in the improved algorithm; (b) The particle state after the robot moves forward in the improved algorithm.
FIGURE 12. Experimental results of laser and encoder fusion SLAM. (a) Gmapping experimental result; (b) Cartographer experimental result; (c) HectorSLAM experimental result.
The map constructed by the Cartographer algorithm based on graph optimization is relatively blurred, and the map constructed by the HectorSLAM algorithm contains more noise at the edges. The Gmapping algorithm, with more explicit map boundaries and less noise, performs better in indoor mapping than the HectorSLAM algorithm and the Cartographer algorithm.
In addition, from the experimental results it can be concluded that the information obtained by the 2D lidar, encoder and IMU can only build a 2D grid map, not a 3D environment map. Moreover, because it is challenging for the laser-based multi-sensor SLAM system to complete loop closure detection, the global consistency of the constructed map is poor. According to the experimental analysis, it can be concluded that the overall performance of the SLAM system integrating vision, laser, encoder and IMU is better.
FIGURE 13. Experimental results of laser and encoder fusion SLAM. (a) Gmapping experimental result; (b) Cartographer experimental result; (c) HectorSLAM experimental result.
FIGURE 14. Image captured by the camera. (a) Color image; (b) Depth image; (c) Depth point cloud data graph; (d) Depth point cloud data graph at runtime.
D. COMPARATIVE EXPERIMENT OF LOOP CLOSURE DETECTION ALGORITHM BASED ON DEEP LEARNING
The loop closure detection experiment in this paper uses the CityCentre dataset from Oxford University: a total of 2474 outdoor scene images of size 640×480 in .jpg format. The indicators for evaluating the loop closure detection algorithm in this paper are mainly the Precision-Recall (P-R) curve and the Average Precision (AP).
In this section, AlexNet is selected as the network for extracting low-dimensional features, and MobileNetV3-Large is used as the network for extracting high-dimensional features. Both networks are pre-trained on ImageNet. The cosine similarity is used to judge the similarity of the output features and perform loop closure detection. The experimental results show that when α = 0.8, β = 0.6, and γ = 0.95, the accuracy of loop closure detection is the highest, and the detection efficiency is improved as the amount of calculation is reduced.
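As an illustrative reconstruction of this two-network scheme, the sketch below extracts features with torchvision's ImageNet-pretrained AlexNet and MobileNetV3-Large and compares them by cosine similarity. How α, β, and γ interact is not specified in this excerpt, so the two-stage thresholding here (a cheap AlexNet screen with α, MobileNetV3 verification with β, and γ unused) is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pre-trained backbones, used only as frozen feature extractors
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).features.eval()
mobilenet = models.mobilenet_v3_large(
    weights=models.MobileNet_V3_Large_Weights.IMAGENET1K_V1).features.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(path):
    """Return (low-level, high-level) global descriptors for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    low = alexnet(x).flatten(1)            # shallow convolutional features
    high = mobilenet(x).mean(dim=(2, 3))   # pooled deep features
    return low, high

def is_loop(query_path, candidate_path, alpha=0.8, beta=0.6):
    """Two-stage loop check: coarse screen on AlexNet features,
    then verification on MobileNetV3-Large features."""
    q_low, q_high = describe(query_path)
    c_low, c_high = describe(candidate_path)
    if F.cosine_similarity(q_low, c_low).item() < alpha:
        return False                       # cheap early rejection
    return F.cosine_similarity(q_high, c_high).item() >= beta
```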
FIGURE 15. Experimental results of vision, laser, encoder and IMU fusion SLAM. (a) Due south; (b) Diagonal direction; (c) Due east.
FIGURE 18. RTAB-MAP mapping result. (a) Due south; (b) Diagonal direction; (c) Due east.
In the comparison experiment, the traditional loop closure detection algorithm is the SIFT-based Bag of Words (BoW) algorithm, and the deep learning-based loop closure detection algorithms are AlexNet, VGG19, ResNet32 and the deep learning-based loop closure detection algorithm proposed in this paper. By observing the P-R curve shown in FIG.17, it can be seen that when the recall rate is lower than 0.4, the precision of the algorithm proposed in this paper is 1. In comparison, the loop closure detection algorithm proposed in this paper has the best effect. From the average precision shown in Table 4, the accuracy rate of the algorithm proposed in this paper is 31.26% higher than that of the traditional BoW algorithm; compared with the deep learning-based AlexNet, VGG19, and ResNet32 algorithms, the accuracy rates increased by 14.21%, 3.05%, and 1.56%, respectively. In contrast, the overall performance of the loop closure detection algorithm proposed in this paper is the best.
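For reference, the P-R curve and AP used as evaluation indicators can be computed with scikit-learn; the labels and similarity scores below are placeholder data, not results from the paper.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

# y_true: 1 if an image pair is a true loop closure, else 0 (placeholder data)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# y_score: similarity the detector assigned to each pair (placeholder data)
y_score = np.array([0.97, 0.42, 0.88, 0.91, 0.55, 0.13, 0.76, 0.60])

precision, recall, _ = precision_recall_curve(y_true, y_score)
ap = average_precision_score(y_true, y_score)  # area under the P-R curve
print(f"AP = {ap:.3f}")
```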
E. COMPARISON EXPERIMENTS OF THE TWO TYPES OF LOOP CLOSURE DETECTION ALGORITHMS
The original RTAB-MAP algorithm is run on the experimental platform for the SLAM mapping experiment, and the results are shown in FIG.18.

TABLE 4. Experimental algorithm loop closure detection accuracy.

On the experimental platform, the loop closure detection algorithm introduced above is then used to replace the traditional loop closure detection algorithm provided by RTAB-MAP, and the SLAM mapping experiment is conducted again. The results are shown in FIG.19.

FIGURE 19. RTAB-MAP mapping results after replacing the loop closure detection algorithm. (a) Due south; (b) Diagonal direction; (c) Due east.

RTAB-MAP can build a 3D map of the surrounding environment. Comparing the experimental results at the red circle mark in FIG.18 and the green circle mark in FIG.19, the system with the deep learning-based loop closure detection algorithm builds a more globally consistent map that contains more environmental information.

V. CONCLUSION
In this paper, we propose a multi-sensor fusion SLAM algorithm framework based on improved Gmapping. Through theoretical research and experimental verification, it is proved that the framework enables mobile robots to work indoors with high robustness and precision. First, the SLAM system that fuses vision, laser, encoder and IMU is generally more stable and accurate. Secondly, the improved Gmapping algorithm is significantly better than the Cartographer algorithm and the HectorSLAM algorithm in terms of indoor mapping performance, and its particle convergence speed is 39.85% higher than that of the original Gmapping algorithm, which significantly reduces the memory and calculation consumed by the algorithm. Finally, the loop closure detection algorithm proposed in this paper is superior to the traditional BoW algorithm and to the AlexNet, VGG19, and ResNet32 algorithms in terms of accuracy, and is superior to the original RTAB-MAP algorithm in terms of the quality of the constructed environment map.

To address the shortcomings of this paper in future research, first, we will test our system in more complex scenarios and improve the test analysis to compare more different algorithms and achieve better results. Second, a multi-geometry-based dynamic object detection method will be added to cooperate with image processing algorithms to further improve the adaptability and robustness of the system in dynamic environments. Third, on this basis, we will explore multi-sensor fusion SLAM and a multi-frame fusion Gmapping algorithm based on a tightly coupled extended Kalman filter to further improve the performance of SLAM.
REFERENCES
[1] M. Yang, "Overview on issues and solutions of SLAM for mobile robot," Comput. Syst. Appl., vol. 27, no. 7, pp. 1–10, Jul. 2018.
[2] M. Labbé and F. Michaud, "RTAB-Map as an open-source LiDAR and visual simultaneous localization and mapping library for large-scale and long-term online operation," J. Field Robot., vol. 36, no. 2, pp. 416–446, 2019.
[3] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-time loop closure in 2D LiDAR SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2016, pp. 1271–1278.
[4] J. Zhang and S. Singh, "Visual-LiDAR odometry and mapping: Low-drift, robust, and fast," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2015, pp. 2174–2181.
[5] X. Wang, "Mobile robot for SLAM research based on LiDAR and binocular vision fusion," Chin. J. Sensors Actuators, vol. 31, no. 3, pp. 394–399, Mar. 2018.
[6] K. Konolige, G. Grisetti, R. Kümmerle, W. Burgard, B. Limketkai, and R. Vincent, "Efficient sparse pose adjustment for 2D mapping," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2010, pp. 22–29.
[7] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: A versatile and accurate monocular SLAM system," IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, Oct. 2015.
[8] S. Kumar and R. M. Hegde, "Multi-sensor data fusion methods for indoor localization under collinear ambiguity," Pervasive Mobile Comput., vol. 30, pp. 18–31, Aug. 2016.
[9] X. Ding, Y. Wang, D. Li, L. Tang, H. Yin, and R. Xiong, "Laser map aided visual inertial localization in changing environment," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Oct. 2018, pp. 4794–4801.
[10] Y. Sun, M. Liu, and M. Q.-H. Meng, "Active perception for foreground segmentation: An RGB-D data-based background modeling method," IEEE Trans. Autom. Sci. Eng., vol. 16, no. 4, pp. 1596–1609, Oct. 2019.
[11] L. Zhang, L. Wei, P. Shen, W. Wei, G. Zhu, and J. Song, "Semantic SLAM based on object detection and improved octomap," IEEE Access, vol. 6, pp. 75545–75559, 2018.
[12] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Trans. Robot., vol. 33, no. 5, pp. 1255–1262, Oct. 2017.
[13] B. Bescos, J. M. Fácil, J. Civera, and J. Neira, "DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes," IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 4076–4083, Oct. 2018.
[14] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Trans. Robot., vol. 34, no. 4, pp. 1004–1020, Aug. 2018.
[15] G. Wan, X. Yang, R. Cai, H. Li, Y. Zhou, H. Wang, and S. Song, "Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2018, pp. 4670–4677.
[16] H. Xue, H. Fu, and B. Dai, "IMU-aided high-frequency LiDAR odometry for autonomous driving," Appl. Sci., vol. 9, no. 7, p. 1506, Apr. 2019.
[17] D. Wisth, "Unified multi-modal landmark tracking for tightly coupled LiDAR-visual-inertial odometry," IEEE Robot. Autom. Lett., vol. 6, no. 2, pp. 1255–1262, Apr. 2021.
[18] L. Meng, C. Ye, and W. Lin, "A tightly coupled monocular visual LiDAR odometry with loop closure," Intell. Service Robot., vol. 15, no. 1, pp. 129–141, Mar. 2022.
[19] R. Li, S. Wang, Z. Long, and D. Gu, "UnDeepVO: Monocular visual odometry through unsupervised deep learning," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Brisbane, QLD, Australia, May 2018, pp. 7286–7291.
[20] C. Zonghai, "Monocular visual odometer based on recurrent convolutional neural network," Robotics, vol. 41, no. 2, pp. 147–155, 2019.
[21] Y. Yu, "A loop closure detection method for visual SLAM based on deep learning," Comput. Eng. Des., vol. 41, no. 2, pp. 529–536, 2020.
[22] Y. Zhou, Y. Wang, F. Poiesi, Q. Qin, and Y. Wan, "Loop closure detection using local 3D deep descriptors," IEEE Robot. Autom. Lett., vol. 7, no. 3, pp. 6335–6342, Jul. 2022.
[23] D. Cattaneo, M. Vaghi, and A. Valada, "LCDNet: Deep loop closure detection and point cloud registration for LiDAR SLAM," IEEE Trans. Robot., vol. 38, no. 4, pp. 2074–2093, Aug. 2022.
[24] S. Das, "Simultaneous localization and mapping (SLAM) using RTAB-MAP," 2018, arXiv:1809.02989.
[25] J. Civera and S. H. Lee, "RGB-D odometry and SLAM," in RGB-D Image Analysis and Processing. Cham, Switzerland: Springer, Oct. 2019, pp. 117–144.
[26] M. Hsiao, E. Westman, G. Zhang, and M. Kaess, "Keyframe-based dense planar SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Singapore, May 2017, pp. 5110–5117.
[27] F. Steinbrucker, C. Kerl, D. Cremers, and J. Sturm, "Large-scale multi-resolution surface reconstruction from RGB-D sequences," in Proc. IEEE Int. Conf. Comput. Vis., Sydney, NSW, Australia, Dec. 2013, pp. 3264–3271.
[28] Y. Sun, M. Liu, and M. Q.-H. Meng, "Improving RGB-D SLAM in dynamic environments: A motion removal approach," Robot. Auton. Syst., vol. 89, pp. 110–122, Mar. 2017.
[29] M. Klingensmith, S. S. Srinivasa, and M. Kaess, "Articulated robot motion for simultaneous localization and mapping (ARM-SLAM)," IEEE Robot. Autom. Lett., vol. 1, no. 2, pp. 1156–1163, Jul. 2016.
[30] R. Scona, M. Jaimez, Y. R. Petillot, M. Fallon, and D. Cremers, "StaticFusion: Background reconstruction for dense RGB-D SLAM in dynamic environments," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Brisbane, QLD, Australia, May 2018, pp. 1–9.
[31] J. Yang, D. Guo, K. Li, Z. Wu, and Y.-K. Lai, "Global 3D non-rigid registration of deformable objects using a single RGB-D camera," IEEE Trans. Image Process., vol. 28, no. 10, pp. 4746–4761, Oct. 2019.
[32] J.-H. Kim, C. Cadena, and I. Reid, "Direct semi-dense SLAM for rolling shutter cameras," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2016, pp. 1308–1315.
[33] H. Li, C. Tian, L. Wang, and H. Lv, "A loop closure detection method based on semantic segmentation and convolutional neural network," in Proc. Int. Conf. Artif. Intell. Electromech. Autom. (AIEA), May 2021, pp. 269–272.
[34] Y. Han, W. Wei, C. Jinhua, D. Didi, and W. Rujia, "Research on SLAM Gmapping based on AF and AS algorithm optimization," J. Jiangsu Inst. Technol., vol. 28, no. 2, pp. 93–101, 2022.

CHENGJUN TIAN received the Doctor of Engineering degree from the Changchun University of Science and Technology, in 2011. He is currently an Associate Professor with the Changchun University of Science and Technology. His current research interests include pattern recognition and intelligent systems. He is also a member of the Education and Training Committee of the China Simulation Society, the Director of the Jilin Province Automation Society, and the Director of the Jilin Province Robotics Society.

HAOBO LIU received the bachelor's degree from the Jilin Institute of Chemical Technology, Jilin, China, in 2021. He is currently pursuing the master's degree with the Changchun University of Science and Technology, Jilin. His current research interests include SLAM and computer vision.

ZHE LIU received the bachelor's degree from the Jiangxi University of Science and Technology, Jiangxi, China, in 2021. She is currently pursuing the master's degree with the Changchun University of Science and Technology, Jilin, China. Her current research interests include SLAM and computer vision.

HONGYANG LI received the bachelor's degree from the Tianjin University of Science and Technology, Tianjin, China, in 2016, and the master's degree from the Changchun University of Science and Technology, Jilin, China, in 2021. He is currently engaged in SLAM and NLP related work.

YUYU WANG received the bachelor's degree in engineering from the School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China, in 2020. He is currently pursuing the master's degree in control science and engineering. His current research interests include computer vision and motion planning of robotic arms.