Localization and Navigation Analysis of Mobile Robot Based on SLAM
Abstract. In recent years, many advanced computer technologies, such as artificial intelligence, have appeared on the market. With the maturity of artificial intelligence technology, intelligent robots have also become more capable. A difficulty that researchers have long wanted to overcome is equipping mobile robots with localization and navigation. Through continuous effort, navigation and positioning technology can now be used proficiently on robots in known environments, but further research is needed for unknown environments. For navigation and positioning, this paper introduces the simultaneous localization and mapping (SLAM) algorithm. The algorithm learns about the surrounding environment through the sensors carried by the robot. If it can be applied skillfully to navigation and positioning, we believe it will help bring positioning and navigation technology to unknown environments.
1. Introduction
This paper mainly introduces a system based on ORB-SLAM that uses the color images and depth images provided by an RGB-D sensor. It expands the sparse feature point map of the original closed system into a dense point cloud map. At the same time, we recommend using an octree structure to construct the map. Finally, the algorithm was successfully applied to an autonomous mobile robot. Experimental results show that the algorithm can achieve autonomous obstacle avoidance under challenging conditions and along long-distance trajectories.
2. Overview
In a real environment, an autonomously moving robot must be able to navigate in large, unstructured, dynamic and unknown spaces. Mobile robots must therefore be able to construct environmental maps and localize themselves within them, that is, perform simultaneous localization and mapping. SLAM has received extensive attention and research in the past 30 years, and scholars have proposed many SLAM methods involving various sensors, optimization techniques and map representations. Lidar, for example, can provide accurate geometric measurements of the environment at high frequencies, but its main disadvantages are high price, high energy consumption, and large size and weight [1].
The visual SLAM method uses the camera as a sensor to estimate the robot's pose and the environment map. Klein proposed dividing tracking and map construction into two parallel threads in a monocular VSLAM system, and realized a parallel tracking and mapping system based on bundle adjustment. RGB-D cameras can provide both color images and depth images. Compared with monocular cameras, they can obtain absolute scale, avoiding the problems of depth calculation and scale
ambiguity. Compared with binocular cameras, they can obtain depth information directly, so the amount of computation is small and the implementation is simpler.
Kerl proposed registering consecutive RGB-D frames by optimizing a photometric consistency error term to construct RGB-D odometry, and later proposed a robust method that combines the photometric consistency error with an error term based on dense depth data, estimating the pose while ignoring map construction. Henry et al. proposed a SLAM system that uses an RGB-D camera combined with the iterative closest point (ICP) algorithm and visual features to compute and optimize the camera pose, thereby constructing a dense point cloud map, but not in real time.
The RGB-D SLAM algorithm proposed by Endres et al. runs on the CPU like Kerl's algorithm, and evaluation experiments on the same hardware show better robustness and accuracy. The algorithm uses random sampling over visual features to estimate closed loops between frames and uses nonlinear methods to optimize the pose graph; the resulting three-dimensional voxel map of the environment can be applied to robot obstacle avoidance, path planning and navigation [2]. However, because the algorithm uses ICP for pose optimization, it relies too heavily on a good initial estimate and lacks a measure of overall matching quality.
Since visual features are usually located on the edges of objects, the depth measurements there are susceptible to noise. To reduce the estimation error, an RGB-D SLAM algorithm based on planar features has been proposed; it helps to improve the accuracy and robustness of traditional ICP while keeping the amount of computation reasonable enough to ensure real-time performance. ElasticFusion performs visual odometry by combining ICP with dense reprojection error measurements to create a point cloud map. The resulting map is very accurate, but it requires GPU acceleration, consumes a lot of resources and is difficult to maintain, so it is not well suited to robot applications. Mur-Artal et al. proposed ORB-SLAM, which estimates pose and structure in real time from image sequences. It is more accurate and reliable than the methods above, and it supports conventional monocular, binocular and RGB-D cameras. ORB-SLAM can perform real-time pose estimation in indoor and outdoor environments of different sizes; it creates sparse feature point maps and provides closed-loop detection, wide-baseline relocalization and fully automatic initialization. Compared with RGB-D SLAM, its disadvantage is that the sparse feature point map it constructs limits its scope of application: it focuses on localization and cannot be used for obstacle avoidance, path planning or autonomous robot navigation [3].
3. Map Construction

For a pixel $(u, v)^T$ in a key frame with depth $z_i$, the corresponding spatial point $(x_i^C, y_i^C, z_i^C)^T$ in the camera coordinate system is given by

$$x_i^C = \frac{(u - c_u)\, z_i}{f_u}, \qquad y_i^C = \frac{(v - c_v)\, z_i}{f_v}, \qquad z_i^C = z_i \qquad (1)$$

where $f_u$, $f_v$, $c_u$, $c_v$ are the internal parameters of the camera, obtained by camera calibration, and for RGB-D cameras $z_i$ can be obtained directly from the depth image data. The coordinates in the camera coordinate system are calculated by formula (1) for all pixels in the key frame. For a point in 3D space, the transformation between its coordinates $P_i^W$ in the world coordinate system and its coordinates $P_i^C$ in the camera coordinate system is given by equation (2):

$$P_i^W = R\, P_i^C + t \qquad (2)$$
In this formula, $R$ represents the rotation matrix of the camera and $t$ represents the displacement vector. The coordinates in the world coordinate system can be obtained from formula (2), and the point cloud image corresponding to the key frame can then be obtained [5].
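To make formulas (1) and (2) concrete, the following is a minimal Python sketch that back-projects a depth image into a camera-frame point cloud and transforms it into the world frame. The intrinsic values and the names `depth`, `fu`, `fv`, `cu`, `cv`, `R`, `t` are illustrative assumptions, not values from the paper's actual system.

```python
import numpy as np

def backproject_depth(depth, fu, fv, cu, cv):
    """Formula (1): map each pixel (u, v) with depth z_i to the point
    (x, y, z) in the camera coordinate system."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cu) * z / fu
    y = (v - cv) * z / fv
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

def camera_to_world(points_c, R, t):
    """Formula (2): P_w = R @ P_c + t, applied to every point."""
    return points_c @ R.T + t

# Hypothetical usage: a synthetic 480x640 depth image and an identity pose.
depth = np.full((480, 640), 2.0)
cloud_c = backproject_depth(depth, fu=525.0, fv=525.0, cu=319.5, cv=239.5)
cloud_w = camera_to_world(cloud_c, R=np.eye(3), t=np.zeros(3))
```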
As the number of key frames grows larger and larger, maintaining such a map becomes more difficult. For robotic applications, spatial points on this scale are impractical and cause a lot of waste. The octree map has a very convenient update mode and can describe arbitrary 3D environments. Compared with the point cloud map, the octree stores the map in a hierarchical structure, which can effectively compress and update the map and adjust the map resolution as needed, solving the point cloud map's lack of compactness while still supporting obstacle avoidance and navigation. The occupancy probability stored in each leaf node of the octree indicates whether the corresponding region of space is occupied, free or unknown, that is, whether a point in 3D space contains an obstacle or can be passed through.
In the process of constructing the map, for a leaf node $n$ with observations $z_1, z_2, \ldots, z_k$ at times $1, 2, \ldots, k$, Bayes' theorem gives the occupancy probability of the node as

$$P(n \mid z_{1:k}) = \left[ 1 + \frac{1 - P(n \mid z_k)}{P(n \mid z_k)} \cdot \frac{1 - P(n \mid z_{1:k-1})}{P(n \mid z_{1:k-1})} \cdot \frac{P(n)}{1 - P(n)} \right]^{-1} \qquad (3)$$

where the prior $P(n)$ is in general assumed to be 0.5. Introducing the logit transformation $\operatorname{logit}(q) = \ln(q / (1 - q))$, taking it on both sides of equation (3) and simplifying with $P(n) = 0.5$ gives

$$L(n \mid z_{1:k}) = L(n \mid z_{1:k-1}) + L(n \mid z_k) \qquad (4)$$
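The additive update in equation (4) is straightforward to implement. Below is a minimal sketch assuming a flat dictionary of leaf nodes keyed by voxel index; the hit/miss probabilities of the inverse sensor model are illustrative assumptions, not values from the paper.

```python
import math
from collections import defaultdict

def logit(q):
    """logit(q) = ln(q / (1 - q)), the log-odds transform."""
    return math.log(q / (1.0 - q))

# Assumed inverse sensor model: a hit raises the log-odds, a miss lowers it.
L_HIT = logit(0.7)    # > 0
L_MISS = logit(0.4)   # < 0

# With the prior P(n) = 0.5, logit(0.5) = 0, so every leaf starts at zero.
log_odds = defaultdict(float)

def update_leaf(voxel, hit):
    """Equation (4): the log-odds of observation z_k is simply added to the
    accumulated log-odds of observations z_1 .. z_{k-1}."""
    log_odds[voxel] += L_HIT if hit else L_MISS

def occupancy(voxel):
    """Invert the logit transform to recover P(n | z_1..z_k)."""
    return 1.0 / (1.0 + math.exp(-log_odds[voxel]))
```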
It can be seen from equation (4) that, for any point in three-dimensional space, each new observation is simply added to the previous result, so the update is flexible and easy to implement. The occupancy probability of a parent node in the octree map can be calculated from the values of its eight child nodes. There are mainly two methods: taking the average value,

$$\bar{L}(n) = \frac{1}{8} \sum_{i=1}^{8} L(n_i),$$

or taking the maximum value,

$$\hat{L}(n) = \max_i L(n_i).$$
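Both parent-node rules follow directly from the children's log-odds values; a short sketch, assuming each inner node keeps its eight children's log-odds in a list:

```python
def parent_log_odds_mean(children):
    """Average rule: l(n) = (1/8) * sum of L(n_i) over the eight children."""
    return sum(children) / 8.0

def parent_log_odds_max(children):
    """Maximum rule: l(n) = max_i L(n_i). The conservative choice for obstacle
    avoidance: a single occupied child marks the whole parent occupied."""
    return max(children)
```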
6. Conclusion
The system presented here is a new VSLAM system based on ORB-SLAM and RGB-D sensors. Through key frame data, the new system makes the original system's map construction more complete. It allows the robot to avoid obstacles more effectively and makes the resulting map easier to maintain.
Acknowledgment
This paper is funded by the 2020 Guangdong Provincial Universities Characteristic Innovation Project “Research on VR Navigation System in High-risk Indoor Environment Based on Mobile Robot SLAM Technology” (2020KTSCX387).
References
[1] Wang Hongjian, Fu Guixia, Bian Xinqian, Li Juan. Synchronous Positioning and Map Construction of Mobile Robots Based on SRCKF [J]. Robotics, 2013: 74-81.
[2] Wang Quan, Hu Yueli, Zhang He. Simultaneous Positioning and Map Construction of Mobile Robots Based on ROS and Kinect [J]. Research in Computer Applications, 2017: 310-313.
[3] Zhao Xiyu, Shen Yiming. Research on Simultaneous Positioning and Map Construction of Mobile Robot Based on ICP [J]. Mechanical Engineering and Automation, 2018: 73-75.
[4] Lin Huican, Lu Qiang, Wang Guosheng, Zhang Yang, Liang Bing. Simultaneous 3D Localization and Map Construction of Autonomous Mobile Robot Based on VSLAM [J]. Computer Applications, 2017.
[5] Zhang Yi, Zhao Liming, Luo Yuan. Research on Simultaneous Positioning and Map Construction of Mobile Robot Based on MIEKF [J]. Computer Application Research, 2011: 108-110.
[6] Chen Qunying. Research on Simultaneous Positioning and Map Construction of Mobile Robots Based on an Adaptive Method [J]. Microcomputer Applications, 2018.
[7] Peng Weizhi, Yuan Fengwei, Zhou Zhiwei. Analysis and Implementation of Mobile Robot Localization Based on RGB-D Camera [J]. Intelligent Computer and Applications, 2019: 175-177+182.
[8] Zhang Yidong. Analysis of Navigation and Positioning Technology for Autonomous Mobile Robot [J]. Integrated Circuit Applications, 2017.
[9] Hu Chunxu, Xiong Xiao, Ren Wei, He Dingxin. Localization and Navigation of Indoor Mobile Robot Based on Embedded System [J]. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2013: 265-268+277.
[10] Wang Quan, Hu Yueli, Zhang He. Mobile Robot Localization and Map Construction Based on ROS and Kinect [J]. Computer Application Research, 2017: 310-313.