ROS2-Powered Autonomous Navigation for TurtleBot3: Integrating the Nav2 Stack in Gazebo, RViz, and Real-World Environments
Abstract—This research focuses on developing and implementing autonomous navigation capabilities for the TurtleBot3 robot platform using Robot Operating System 2 (ROS2) and the Navigation2 (Nav2) stack. The aim is to enable the TurtleBot3 robot to navigate autonomously within simulated environments built in Gazebo, with the navigation process visualized through ROS Visualization (RViz). Consideration is also given to deploying these navigation capabilities in real-world scenarios. The work utilizes ROS2, known for its improved performance and scalability. The Nav2 stack, a modular and configurable navigation framework for ROS2, is employed to carry out autonomous navigation tasks. Gazebo serves as the simulation environment, providing realistic virtual worlds for testing navigation algorithms, while RViz is used to visualize mapping data and navigation parameters, enhancing the understanding of the navigation process. The novelty of the work extends to both simulation and hardware implementations. The work involves loading pre-created maps, performing initial position and orientation (pose) estimation, setting navigation goals, planning paths, and executing autonomous navigation to reach specified destinations while avoiding obstacles. Key components include sensor data processing, path planning algorithms, and real-time visualization of the navigation process. Overall, this project aims to demonstrate the effectiveness of ROS2 and the Nav2 stack in enabling autonomous navigation for the TurtleBot3 platform in both simulated and real-world environments. To assess its efficacy in real-world scenarios, a hardware TurtleBot3 is developed. By bridging simulation and real-world scenarios, the paper contributes to developing effective navigation solutions for robotic applications. The results demonstrate the capabilities and effectiveness of real-world navigation using the TurtleBot3 platform.

Keywords—ROS2, TurtleBot3, Nav2 Stack, Gazebo, RViz

I. INTRODUCTION

In recent times, mobile robotics has experienced notable progress, especially in autonomous navigation. Among the platforms contributing to these advancements, TurtleBot3 has emerged as a versatile and accessible platform for robotics enthusiasts, researchers, and developers. With its compact design, affordability, and open-source nature, TurtleBot3 is an ideal platform for experimentation and innovation in robotic navigation. Navigation is fundamental to autonomous robots, enabling them to traverse complex environments, avoid obstacles, and efficiently reach designated goals. To enhance the navigation capabilities of TurtleBot3, the Nav2 (Navigation 2) stack, built upon the Robot Operating System 2 (ROS2) framework [1], offers a robust and customizable solution. The Nav2 stack provides extensive tools and algorithms dedicated to autonomous navigation, encompassing mapping, localization, path planning, and obstacle avoidance.

Autonomous navigation is a complex task that necessitates the development of an internal spatial representation grounded in recognizable landmarks and robust visual processing, facilitating continuous self-localization and destination representation. When robots are deployed in environments such as factories, equipping them with accurate prior models of the environment is usually impractical; hence, robots must first build a model of the environment themselves. For mobile robots, this typically involves generating a map for self-localization and collision-free path planning based on assigned tasks [2]. The TurtleBot3 Burger is a popular and versatile mobile robot platform designed to be affordable and user-friendly, making it an attractive option for research and education in robotics, particularly for exploring autonomous navigation [3].

Yurtsever et al. recently examined the evolving technologies and standard practices within autonomous driving, shedding light on persistent challenges and unresolved system architectures; the study reviewed methodologies, core functions, and technical aspects of automated driving [4]. Zhu et al. reviewed deep reinforcement learning (DRL) techniques and a DRL-centric navigation framework, offering a comparative analysis of their utilization across indoor navigation, social navigation, and obstacle avoidance contexts. They discussed the development of DRL-based navigation systems in various scenarios, although the review may lack sufficient empirical validation of the discussed systems [5]. Nicholson et al. developed a technique for 2D object detection with simultaneous estimation of quadric 3D surfaces, addressing the challenge of partly visible objects; camera position and orientation (pose) and quadric parameters were estimated in a factor-graph SLAM framework [6]. Qin et al. proposed a robust estimator for
979-8-3503-7613-5/24/$31©2024 IEEE
Authorized licensed use limited to: Inha University. Downloaded on January 26,2025 at 05:46:34 UTC from IEEE Xplore. Restrictions apply.
monocular visual-inertial state estimation, achieving higher accuracy through feature-based optimization and loop detection; their versatile system showed promise for applications requiring precise localization [7].

Motivated by the literature reviewed, this paper investigates leveraging the Nav2 stack to enhance the navigation capabilities of TurtleBot3. Fig. 1 shows the assembled TurtleBot3. An overview of the Nav2 architecture is provided, highlighting its key components and functionalities. Furthermore, the integration of Nav2 with TurtleBot3 is discussed, outlining the steps involved in setting up a navigation system on the platform. Through a series of experiments and case studies, the effectiveness and versatility of the TurtleBot3 and Nav2 combination in various real-world scenarios are demonstrated. The ability to navigate dynamic environments, adapt to changing conditions, and accomplish complex tasks autonomously is showcased. Additionally, the customization options offered by the Nav2 stack are explored, allowing users to tailor the navigation system to specific requirements and preferences. Our work contributes to the growing body of research aimed at democratizing robotics and making advanced navigation capabilities accessible to a wider audience. Leveraging the open-source nature of TurtleBot3 and the flexibility of the Nav2 stack empowers robotics enthusiasts, educators, and researchers to explore new frontiers in autonomous navigation and robotics applications.

Fig. 1. Proposed Model of TurtleBot3 Burger

II. BACKGROUND

A. Nav2: A Navigation System

Nav2 introduces a modular and dynamically reconfigurable core featuring a Behavior Tree (BT) navigator and task-specific asynchronous servers: the Planner, Controller, and Recovery servers [8]. These servers, functioning as action servers, oversee the environmental representation used by algorithm plugins to generate outputs, all orchestrated by the BT navigator. Planner plugins focus on global planning, computing a valid and potentially optimal path from the robot's current position to a designated goal position. In contrast, Controller plugins, serving as replacements for Nav1's local planner, establish a feasible control strategy to follow the global plan, utilizing a local environmental representation for localized planning. Additionally, Recovery behaviors, activated by the BT in the event of navigation failure, handle recovery actions. The Planner and Controller servers operate on two distinct costmap representations of the environment: the global costmap and the local costmap, respectively [9]. The global costmap relies on a pre-loaded static map, whereas the local costmap utilizes sensor information.

Nav2 adopts the costmap layers method, where each layer tracks a specific type of obstacle or constraint and modifies a master costmap used for path planning. The base layers in Nav2 include the static, obstacle, and inflation layers, each serving unique functions to ensure safe navigation.

Dynamic obstacle handling is primarily addressed locally within the Controller Server. Dynamic obstacle avoidance is managed by Controller Server plugins such as the DWB Controller and the Timed Elastic Band (TEB) Controller. The DWB Controller, successor to the Dynamic Window Approach controller in ROS1, and the TEB Controller are among the plugins deployed for local planning in Nav2. The Dynamic Obstacle Layer is an additional costmap layer plugin for the local costmap; it integrates velocity and orientation data of dynamic obstacles into the occupancy grid, supplementing current controller plugins and augmenting navigation capabilities [10].

B. Robot Operating System 2 (ROS2)

ROS 2, the second iteration of ROS, is an open-source framework for developing robot software. It comprises a set of tools, libraries, and standards intended to streamline the creation of intricate and resilient robot behaviours across diverse platforms [11]. ROS 2 was developed to address limitations of ROS 1 and to cater to the evolving needs of the robotics community. It provides better support for real-time and deterministic behaviour, making it suitable for applications with strict timing requirements. ROS 2 is also designed to scale better than ROS 1, both in terms of computational resources and team size; it supports distributed computing architectures, allowing for more complex and distributed robot systems, and includes features for secure communication between nodes, ensuring data integrity and confidentiality in systems deployed in sensitive environments. ROS 2 uses a new middleware, the Data Distribution Service (DDS) [12], which offers better performance and reliability compared to the middleware used in ROS 1.

III. METHODOLOGY

The research methodology employed in this study comprehensively evaluates the performance of the TurtleBot3 Burger's navigation and obstacle avoidance algorithms. Initially, experiments are conducted using the TurtleBot3 Burger platform in various simulated and real-world environments to gather empirical data. These experiments involve executing navigation tasks, such as reaching designated goals or following predefined paths, while encountering diverse environmental conditions and obstacles. Quantitative data is collected during these experiments, including metrics such as navigation time, path completion rate, distance travelled, and obstacle detection frequency. Additionally, qualitative observations are made regarding the robot's behaviour, such as its ability to navigate complex environments, respond to dynamic obstacles, and adapt its trajectory in real time.

Following data collection, statistical methods are applied to the quantitative data to discern patterns and trends in the performance of the robot's navigation and obstacle avoidance algorithms. Moreover, qualitative data obtained from observations and user feedback may be analyzed using thematic analysis, content analysis, or other qualitative research methods to gain insights into the navigation
algorithms' effectiveness, usability, and limitations from a user perspective.

By combining quantitative analysis of performance metrics with qualitative insights from observations and user feedback, the research methodology provides a comprehensive understanding of the strengths and weaknesses of the TurtleBot3 Burger's navigation and obstacle avoidance capabilities. This holistic approach enables researchers to identify areas for improvement, optimize algorithm parameters, and inform future development efforts to enhance the overall navigation performance of the robot. The research methodology for autonomous mobile robot navigation using the TurtleBot3 Burger consists of two primary components: the experimental setup and sensor data acquisition.

A. Experimental Setup

The experimental setup involves deploying the TurtleBot3 Burger, equipped with sensors and software, to evaluate various navigation and obstacle avoidance algorithms. Fig. 2 shows the setup of the real-world environment. Operating autonomously, the robot traverses environments while navigating around obstacles. Serving as the primary robotic platform for the experiments, the TurtleBot3 Burger is equipped with a range of sensors, including a laser distance sensor (LDS), a camera, and an inertial measurement unit (IMU), enabling obstacle detection and autonomous navigation [13]. The navigation and obstacle avoidance algorithms are implemented through the ROS navigation stack and the Nav2 navigation stack. Experiments are conducted in controlled settings to ensure consistency, typically in a room with strategically positioned obstacles.

Fig. 2. Environmental Setup

The experimental setup is custom-tailored to evaluate the effectiveness of various navigation and obstacle avoidance algorithms in diverse scenarios. Insights obtained from these experiments enable the assessment of algorithm performance and the identification of areas for improvement. Employing a research design that combines quantitative and qualitative methods, the setup utilizes the TurtleBot3 Burger to assess navigation and obstacle avoidance algorithms under varying conditions. The data generated from these evaluations assists in gauging the efficacy of the algorithms and in uncovering opportunities for further enhancement.

B. Sensor Data Acquisition

The LDS is widely adopted for obstacle detection and avoidance in autonomous mobile robots. It operates by emitting laser beams and measuring the time taken for the beams to reflect from objects surrounding the robot. By analyzing these reflections, the LDS provides information about the distance and position of objects in the robot's vicinity.

Within the TurtleBot3 Burger platform [14], LDS data acquisition is managed through a ROS package named "hls_lfcd_lds_driver". This package includes a driver node that interfaces directly with the LDS sensor and publishes the sensor data as ROS messages. These messages, of type "sensor_msgs/LaserScan", contain comprehensive information regarding the range and intensity of laser beams at specific angles. Upon acquisition, the sensor data undergoes processing to extract information about the environment. A common method involves the "gmapping" package in ROS, known for its probabilistic Simultaneous Localization and Mapping (SLAM) approach. Gmapping employs sensor data to create a map of the environment while simultaneously estimating the robot's position and orientation within the map. Alternatively, clustering algorithms can be applied to LDS data to identify clusters of points representing distinct objects in the environment; for instance, the "euclidean_cluster_extraction" package in ROS can group nearby points into clusters based on their proximity.

In TurtleBot3, LDS data is commonly structured as ROS "LaserScan" messages [15], which provide comprehensive details regarding the range and angle of each laser beam emitted by the 2D LiDAR sensor.

Developers can subscribe to the "LaserScan" messages using a ROS subscriber in their code, enabling them to process the data for various purposes; for instance, they can use the sensor data to create a 2D map of the environment or to implement the real-time obstacle avoidance algorithms discussed earlier. Efficient acquisition and processing of LDS data are crucial for the TurtleBot3 Burger's navigation and obstacle avoidance capabilities. By accurately sensing and interpreting the robot's surroundings, informed movement decisions are made to navigate the environment while avoiding collisions with obstacles.

IV. RESULTS & DISCUSSION

This section is structured into two segments: the simulation of TurtleBot3 navigation in Gazebo and the analysis of real-world navigation results. Initially, the outcomes of the simulation conducted in Gazebo are presented, followed by a discussion of the results obtained from real-world experiments. TurtleBot3 supports a simulation development environment, facilitating programming and development using a virtual robot within the simulation framework. Two development tools serve this purpose: one employs a simulated node alongside the 3D visualization tool RViz, while the other utilizes the 3D robot simulator Gazebo. Gazebo is a versatile platform for conducting simulations across various virtual environments, while RViz specializes in visualizing mapping data and its related parameters. To ensure compatibility with ROS2 Foxy, it is imperative to install the correct version of Gazebo before executing the Gazebo simulation instructions. SLAM is a method employed to create a map of an environment while concurrently estimating the robot's present location within that space; this functionality has been a fundamental aspect of TurtleBot across its various iterations.

Navigation requires a map containing geometric details of the environment's furniture, objects, and walls. Navigation enables the robot to move from its current location to a predefined goal position on the map, leveraging inputs from the robot's encoders, IMU sensor, and
distance sensor. The procedure for executing this task is outlined below.

A. Gazebo Simulation of Navigation

The TurtleBot3 simulation package depends on two prerequisite packages, "turtlebot3" and "turtlebot3_msgs", which play a crucial role in setting up the simulation environment; without them, the simulation cannot be started.

The "turtlebot3" package provides the core functionalities and configurations necessary to simulate the TurtleBot3 robot model, including robot descriptions, controllers, launch files, and other resources to instantiate and control the virtual robot. The "turtlebot3_msgs" package contains message definitions specific to TurtleBot3, such as sensor data formats, control commands, and other communication protocols. These message definitions enable seamless communication between the different components of the simulation system, allowing data and commands to be exchanged between the simulated robot and external entities. Together, these prerequisite packages ensure the proper initialization and functioning of the TurtleBot3 simulation environment. Three simulation environments are available for TurtleBot3:

1) Empty World
As shown in Fig. 3, this environment provides a clean, open space with no obstacles or structures. It is suitable for basic navigation tasks and for testing fundamental robot behaviours.

2) TurtleBot3 World
Fig. 4 shows the TurtleBot3 World environment, a predefined world populated with various obstacles, walls, and structures. It is designed to offer a more realistic setting for testing navigation and obstacle avoidance algorithms in moderately complex environments. By providing a diverse range of obstacles and structures, such as walls, furniture, and other objects, TurtleBot3 World enables users to simulate real-world scenarios that the robot may encounter during operation, including navigating through narrow passages, avoiding obstacles of different shapes and sizes, and traversing uneven terrain.

Fig. 4. TurtleBot3 World

3) TurtleBot3 House
The TurtleBot3 House environment, shown in Fig. 5, simulates a household environment with rooms, furniture, and other domestic elements. It presents a challenging scenario for navigation tasks, including tight spaces, cluttered areas, and multiple rooms, allowing for comprehensive testing of the robot's capabilities in realistic indoor settings.

Fig. 5. TurtleBot3 House

4) Navigation in Gazebo
Users can select or create various environments and robot models within the virtual navigation world. The pre-existing map is loaded before initiating the navigation node. Unlike physical navigation, Fig. 6 shows that navigation simulation involves preparing the simulation environment rather than deploying the physical robot. Before starting navigation, the map is calibrated via RViz from the TurtleBot3's initial position within the TurtleBot3 House. It is essential to perform initial pose estimation before running navigation, as this step initializes the AMCL parameters vital for navigation. Accurate positioning of the TurtleBot3 on the map is crucial, so that the LDS sensor data neatly overlaps the displayed map. During refinement, the robot is moved slightly back and forth to gather information about the surrounding environment and improve the estimated location of the TurtleBot3 on the map. After terminating the teleoperation node, users can set the navigation goal using the navigation goal button within RViz, which lets them specify the desired destination for the robot. The root of the arrow denotes the x and y coordinates of the destination, while its orientation determines the goal angle. Once the x and y coordinates are set, the TurtleBot3 moves towards the destination. A basic collision avoidance node is developed to achieve autonomous collision avoidance, keeping a safe distance from obstacles and making turns to prevent collisions.
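The decision logic of such a collision avoidance node can be sketched as follows. This is a minimal illustration rather than the exact node used in the experiments: the safety distance, scan sector, and speed values are assumed, and in practice the `decide` function would sit inside an rclpy node subscribing to `sensor_msgs/LaserScan` and publishing `geometry_msgs/Twist` on `/cmd_vel`, as indicated in the trailing comments.

```python
import math

# Assumed tuning values for illustration; the paper does not specify them.
SAFE_DIST = 0.35      # stop-and-turn threshold in metres
FORWARD_SPEED = 0.15  # m/s, within TurtleBot3 Burger limits
TURN_SPEED = 0.8      # rad/s

def decide(ranges, angle_min, angle_increment):
    """Return a (linear, angular) velocity pair from one laser sweep.

    Only beams within +/-30 degrees of the robot's heading are
    considered; if any of them is closer than SAFE_DIST, the robot
    stops driving forward and turns away from the nearer side.
    """
    left_clear = right_clear = float("inf")
    for i, r in enumerate(ranges):
        if not math.isfinite(r) or r <= 0.0:
            continue  # skip invalid returns
        angle = angle_min + i * angle_increment
        # Wrap the beam angle to [-pi, pi]; 0 rad is straight ahead.
        a = math.atan2(math.sin(angle), math.cos(angle))
        if 0.0 <= a <= math.radians(30):
            left_clear = min(left_clear, r)     # front-left sector
        elif -math.radians(30) <= a < 0.0:
            right_clear = min(right_clear, r)   # front-right sector
    if min(left_clear, right_clear) < SAFE_DIST:
        # Obstacle ahead: rotate in place towards the clearer side
        # (positive angular.z is counter-clockwise in ROS convention).
        return 0.0, (-TURN_SPEED if left_clear < right_clear else TURN_SPEED)
    return FORWARD_SPEED, 0.0

# Inside an rclpy node the function would be used roughly as:
#   def scan_cb(self, msg):                      # sensor_msgs/LaserScan
#       lin, ang = decide(msg.ranges, msg.angle_min, msg.angle_increment)
#       twist = Twist(); twist.linear.x = lin; twist.angular.z = ang
#       self.cmd_pub.publish(twist)              # geometry_msgs/Twist on /cmd_vel
```

Keeping the decision function free of ROS types makes it straightforward to unit-test against synthetic scans before deploying it on the robot.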
In the TurtleBot3 World simulation, the collision avoidance node is executed after terminating the teleoperation node. Note that this functionality applies only to the simulation environment and does not translate directly to real-world scenarios.

B. Real-World Navigation

In real-world navigation, the process involves loading a previously saved map of the environment. This map, shown in Fig. 7, contains geometric information about the surroundings, including obstacles, walls, and other structures. By loading the saved map, the robot gains knowledge of its operational environment, allowing it to navigate effectively and avoid collisions with obstacles.

Fig. 7. Loaded the Saved Map

The process typically begins with initial pose estimation, which involves determining the initial pose of the robot within the environment, as shown in Fig. 8. Initial pose estimation is crucial for accurate localization, allowing the robot to establish its position relative to the loaded map and plan its navigation accordingly. Fig. 9 shows that setting the navigation goal involves specifying the desired destination or target location the robot needs to reach within the environment; the starting point is indicated with a red marking. This goal defines the endpoint of the robot's planned trajectory and guides its navigation behaviour.

Once the navigation goal is set, the next step is to plan a path from the robot's current position to the specified goal location, as shown in Fig. 10. The path planning process typically begins with an initial exploration of the environment to gather information about obstacles and constraints. Based on this information, the path planning algorithm computes a trajectory that navigates around obstacles while optimizing for path length, smoothness, and clearance from obstacles.

Fig. 10. Setting Navigation Goal

Once the path is planned, it is represented as a sequence of waypoints or poses in the robot's coordinate frame. These waypoints guide the robot's motion along the planned trajectory, providing specific locations to move towards as it progresses towards the goal.

Reaching the destination signifies that the robot has successfully arrived at the specified goal location, as indicated in Fig. 11. This marks the completion of the navigation task and demonstrates the robot's ability to navigate to desired destinations autonomously. Reaching the destination involves the robot following the path generated during path planning; as the robot moves towards the goal, it continuously monitors its position and adjusts its trajectory to navigate around obstacles and adhere to the planned path. Fig. 12 shows the destination being reached in RViz, and Fig. 13 shows the result in the real-world environment, where the destination point is marked in yellow.

Fig. 11. Path Planned to the Goal
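The goal specified by the RViz arrow reduces to an (x, y, θ) tuple: the position comes from the arrow's root and the heading from its orientation. For planar navigation, the heading maps onto the quaternion carried by a `geometry_msgs/PoseStamped` goal as z = sin(θ/2), w = cos(θ/2). The sketch below shows this conversion; the `nav2_simple_commander` usage in the comments is indicative only, and `pose_stamped_built_from` is a hypothetical helper that would copy these fields into a real PoseStamped message.

```python
import math

def goal_quaternion(theta):
    """Quaternion (x, y, z, w) for a pure yaw rotation theta (radians).

    For planar navigation only the z and w components are non-zero:
    z = sin(theta / 2), w = cos(theta / 2).
    """
    return (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))

def make_goal(x, y, theta):
    """Bundle an (x, y, theta) RViz-style goal into a plain dict that
    mirrors the fields of a geometry_msgs/PoseStamped message."""
    qx, qy, qz, qw = goal_quaternion(theta)
    return {
        "frame_id": "map",  # navigation goals are expressed in the map frame
        "position": {"x": x, "y": y, "z": 0.0},
        "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
    }

# With Nav2 running, a goal like this could be dispatched through the
# NavigateToPose action, e.g. via nav2_simple_commander (illustrative):
#   from nav2_simple_commander.robot_navigator import BasicNavigator
#   nav = BasicNavigator()
#   nav.goToPose(pose_stamped_built_from(make_goal(1.5, 0.5, math.pi / 2)))
```

The same conversion also applies when publishing the initial pose estimate, since AMCL expects the robot's heading encoded as a yaw-only quaternion.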
V. CONCLUSION

In conclusion, this work has demonstrated the capabilities and effectiveness of real-world navigation using the TurtleBot3 platform. The TurtleBot3 simulation package has allowed virtual environments to be created in which navigation algorithms can be tested and refined in a controlled setting. Using the flexibility of simulation, diverse environments and scenarios were simulated, providing valuable insights into the performance of navigation algorithms under different conditions. In the real-world navigation experiments, saved maps were successfully loaded, initial pose estimation was performed, navigation goals were set, and path planning was executed to guide the TurtleBot3 to its destination. These experiments have confirmed the efficacy of the navigation system in real-world scenarios, showcasing its capability to navigate autonomously while evading obstacles and adhering to environmental limitations.

Going forward, additional enhancements and optimizations can be implemented to improve the efficiency of the navigation system. These may include refining algorithms for path planning and obstacle avoidance, integrating additional sensors for enhanced perception, and exploring advanced localization techniques for improved pose estimation accuracy. Overall, this work has provided valuable insights into the mobile robotics navigation field and laid the groundwork for future research and development in this area. The TurtleBot3 platform exhibits considerable promise through ongoing innovation and experimentation across various applications, from indoor navigation within household settings to outdoor exploration and surveillance missions.

VI. ACKNOWLEDGEMENT

The authors extend their heartfelt gratitude to RUSA 2.0 Major Project T4J for the generous funding of the research project titled "Design and Development of a Robot Controller for Synchronized Cooperative Control of Heterogeneous Industrial Robot Arms". This support significantly contributed to the successful execution of the project and the attainment of its objectives.

REFERENCES

[1] T. Schworer, J. E. Schmidt, and D. Chrysostomou, "Nav2CAN: Achieving Context Aware Navigation in ROS2 Using Nav2 and RGB-D Sensing," IST 2023 - IEEE International Conference on Imaging Systems and Techniques, 2023, doi: 10.1109/IST59124.2023.10355731.
[2] Y. Gao, J. Liu, M. Q. Hu, H. Xu, K. P. Li, and H. Hu, "A New Path Evaluation Method for Path Planning with Localizability," IEEE Access, vol. 7, pp. 162583–162597, 2019, doi: 10.1109/ACCESS.2019.2950725.
[3] Z. Al-Mashhadani, M. Mainampati, and B. Chandrasekaran, "Autonomous Exploring Map and Navigation for an Agricultural Robot," 2020 3rd International Conference on Control and Robots (ICCR 2020), pp. 73–78, Dec. 2020, doi: 10.1109/ICCR51572.2020.9344404.
[4] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda, "A Survey of Autonomous Driving: Common Practices and Emerging Technologies," IEEE Access, vol. 8, pp. 58443–58469, 2020, doi: 10.1109/ACCESS.2020.2983149.
[5] K. Zhu and T. Zhang, "Deep Reinforcement Learning Based Mobile Robot Navigation: A Review," Tsinghua Sci. Technol., vol. 26, no. 5, pp. 674–691, Oct. 2021, doi: 10.26599/TST.2021.9010012.
[6] L. Nicholson, M. Milford, and N. Sunderhauf, "QuadricSLAM: Dual Quadrics from Object Detections as Landmarks in Object-Oriented SLAM," IEEE Robot. Autom. Lett., vol. 4, no. 1, pp. 1–8, Jan. 2019, doi: 10.1109/LRA.2018.2866205.
[7] T. Qin, P. Li, and S. Shen, "VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator," IEEE Trans. Robot., vol. 34, no. 4, pp. 1004–1020, Aug. 2018, doi: 10.1109/TRO.2018.2853729.
[8] S. Gugliermo, E. Schaffernicht, C. Koniaris, and F. Pecora, "Learning Behavior Trees from Planning Experts Using Decision Tree and Logic Factorization," IEEE Robot. Autom. Lett., vol. 8, no. 6, pp. 3534–3541, Jun. 2023, doi: 10.1109/LRA.2023.3268598.
[9] D. D. Fan, A. A. Agha-Mohammadi, and E. A. Theodorou, "Learning Risk-Aware Costmaps for Traversability in Challenging Environments," IEEE Robot. Autom. Lett., vol. 7, no. 1, pp. 279–286, Jan. 2022, doi: 10.1109/LRA.2021.3125047.
[10] L. Mengmeng, Y. Xiaofei, X. Zhengrong, W. Qi, and H. Jiabao, "Dynamic Route Planning Method Based on Deep Reinforcement Learning and Velocity Obstacle," Proc. 2023 IEEE 12th Data Driven Control and Learning Systems Conference (DDCLS 2023), pp. 627–632, 2023, doi: 10.1109/DDCLS58216.2023.10166409.
[11] T. B. Sant'anna, M. B. Argolo, and R. T. Lima, "Comparative Analysis in Real Environment of Trajectory Controllers on ROS2," Proc. 2023 Latin American Robotics Symposium, Brazilian Symposium on Robotics, and Workshop of Robotics in Education (LARS/SBR/WRE 2023), pp. 308–312, 2023, doi: 10.1109/LARS/SBR/WRE59448.2023.10332973.
[12] P. Phueakthong and J. Varagul, "A Development of Mobile Robot Based on ROS2 for Navigation Application," International Electronics Symposium 2021 (IES 2021), pp. 517–520, Sep. 2021, doi: 10.1109/IES53407.2021.9593984.
[13] L. Hamad, M. A. Khan, and A. Mohamed, "Object Depth and Size Estimation Using Stereo-Vision and Integration with SLAM," IEEE Sens. Lett., vol. 8, no. 4, pp. 1–4, Apr. 2024, doi: 10.1109/LSENS.2024.3367956.
[14] H. Lu, S. Yang, M. Zhao, and S. Cheng, "Multi-Robot Indoor Environment Map Building Based on Multi-Stage Optimization Method," Complex System Modeling and Simulation, vol. 1, no. 2, pp. 145–161, Jun. 2021, doi: 10.23919/CSMS.2021.0011.
[15] Y. Huang, T. Shan, F. Chen, and B. Englot, "DiSCo-SLAM: Distributed Scan Context-Enabled Multi-Robot LiDAR SLAM with Two-Stage Global-Local Graph Optimization," IEEE Robot. Autom. Lett., vol. 7, no. 2, pp. 1150–1157, Apr. 2022, doi: 10.1109/LRA.2021.3138156.