Two-dimensional simultaneous localization and mapping (2D-SLAM) struggles with closed-loop detection in large, unstructured spaces with poor lighting, leading to accumulated positioning errors, information loss, and low map construction accuracy. To overcome these limitations, we focus on a multimodal fusion algorithm that combines a depth camera and LiDAR with a particle filter at the SLAM front end. Map accuracy in complex environments is maintained by a multilayer iterative closest point (ICP) matching method, which produces accurate laser odometry and screens the loop-closure candidate set for closed-loop detection of 2D LiDAR data. Loop-closure information is calibrated through proximity detection, reducing global map drift, and incremental optimization improves the operational efficiency of the SLAM algorithm. On a public dataset, the proposed algorithm reduces the average relative pose error by 52% compared with the 2D-SLAM algorithm Cartographer, and it reduces detection time by an average of 15% while maintaining loop-closure detection accuracy. The method was further validated in a complex real-life environment using a Turtlebot2 robot. The results show that the proposed multimodal fusion SLAM method can accurately reconstruct environmental information and achieve high-precision, real-time mapping in complex and expansive scenes.
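The abstract's laser odometry is built on ICP scan matching. The paper's multilayer formulation is not given here, but a minimal single-layer point-to-point 2D ICP sketch (the function name `icp_2d` and its parameters are illustrative assumptions, not the authors' implementation) shows the core alignment loop that such an odometry front end iterates:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iters=50, tol=1e-8):
    """Align 2D point cloud `source` onto `target` with point-to-point ICP.

    Returns (R, t) such that `source @ R.T + t` approximates `target`.
    """
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    tree = cKDTree(target)          # nearest-neighbor correspondences
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)
        matched = target[idx]
        # Closed-form rigid alignment (Kabsch/SVD) for the current matches.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        # Accumulate the incremental transform into the total one.
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:               # converged
            break
        prev_err = err
    return R_total, t_total
```

A multilayer variant, as named in the abstract, would typically run this loop coarse-to-fine over downsampled copies of the scans, using each layer's result to initialize the next.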
Keywords: LiDAR, detection and tracking algorithms, visualization, data fusion, associative arrays, matrices, environmental sensing