A Guide for 3D Mapping with Low-Cost
Sensors Using ROS
Abstract Open source software and low-cost sensors bring unquestionable advan-
tages to the robotics community, facilitating the access to a wide range of robotic
applications to virtually anyone, and playing an important role in recent advances.
With the progressive removal of the cost barrier, the development of robotic solutions has become more widespread in our society, as witnessed by the increase of service robotics applications for consumers. Despite the ease of access to low-cost
sensors nowadays, knowledge of sensor integration and robotics middleware is still
imperative to design successful robotic solutions. In this tutorial chapter, we provide
an educational guide that leverages open source tools, such as the Robot Operating
System (ROS), and low-cost sensors, such as the Microsoft Kinect v2, to design a
full 3D indoor Mapping system. The ultimate goal of the system is to create a com-
prehensive and detailed 3D map of any indoor environment. Besides going through
all the key steps to setup such a system and presenting interactive results, we also
provide an experimental dataset that we hope can be useful to the community in the
future.
1 Introduction
The market of sensors for robotics is changing: precision sensors for robotic appli-
cations, such as time-of-flight (TOF) cameras, laser range finders (LRFs) or Inertial
Measurement Units (IMUs) used to be only available to some companies and/or
robotics institutes, due to their high cost. In recent years, there has been an evident effort to provide low-cost alternatives to traditionally expensive sensors, driven by the vision of a rising number of robotics companies and an increased investment in robotics worldwide, which has been crucial in bringing service robots closer to the consumer.
Additionally, open source projects such as the Arduino electronics platform [1],
OpenCV [2] or the MORSE and Gazebo simulators [3] have been fundamental for
educational robotics, significantly helping roboticists to get started. In this realm,
one of the most important steps is the wide take-up of a quasi-standard robotics
middleware: the Robot Operating System (ROS1 ). Nowadays, most research robots
run ROS, and its adoption is continuously growing in the industry. ROS has a peer-
to-peer, modular, tools-based, free and open source nature [4], allowing software
developers to create robotic applications in a quick and easy way. Moreover, ROS is
a language-independent framework, which provides hardware abstraction, low-level
device control, implementation of commonly-used functionalities, message-passing
between processes and package management. ROS promotes code reuse with dif-
ferent hardware, providing a large amount of libraries available for the community,
like laser-based 2D SLAM2 [5], 3D point cloud based object recognition [6], Adap-
tive Monte Carlo [7] and Extended Kalman Filter localization [8], robot navigation
software [9], manipulation libraries [10], serial communication [11] among others,
as well as tools for 3D visualization (rviz), recording experiments and playing back
data offline (rosbag), and more. Thanks to these features, ROS is currently used worldwide, with regular updates and broad community support, which enables users to obtain, build, write, test and run ROS code. Clearly, integrating robots and sensors in ROS is highly beneficial.
In this work, we provide an educational guide for setting up a system that anyone
can use to build comprehensive and detailed 3D maps of indoor environments. The
proposed system takes advantage of open source tools, such as ROS, two open
source mapping algorithms that have also been integrated in ROS (RTAB-Map and
Hector Mapping), as well as two widely available low-cost sensors (Microsoft Kinect
v2 and Slamtec RPLIDAR A2). Next, we overview seminal work on 3D mapping
approaches, and existent solutions in ROS for building comprehensive 3D maps
with diverse sensors. We then present the sensors and algorithms used in this tutorial
chapter, and we will focus on the system functionality in what remains, guiding the
user towards creating a low-cost 3D indoor mapping system based on ROS. We also
provide a dataset for those who do not have access to similar sensors to test the
approach proposed, and we finish the chapter with conclusions and future work.
1 http://www.ros.org.
2 SLAM stands for the well-known Simultaneous Localization And Mapping problem.
2 Background
Building precise 3D maps is useful for several practical applications and it is consid-
ered a fundamental task in Robotics. Mapping may be crucial to acquire precise local-
ization, or to support other robotic tasks, be it indoor, e.g. mobile manipulation [12],
patrolling and coverage [13], people detection [14], human-robot interaction [15],
or outdoor tasks, e.g. search and rescue [16], humanitarian demining [17], recon-
naissance and exploration [18], UAV surveying [19], structure reconstruction [20],
etc.
Initial work in robotic 3D mapping focused on 3D reconstruction of unknown envi-
ronments using data acquired by panning and/or tilting Laser Range Finders [21], and
stereo cameras [23]. The main focus was placed on configuring and tracking the sensors to acquire depth data while precisely estimating the sensor's motion, also applying
geometrical corrections, and devising efficient methods for registering and match-
ing consecutive sensor data to reconstruct the environment through the extraction of
geometrical features from range sensing.
In [22], a robot is equipped with a forward-looking laser for concurrent mapping
and localization in 2D, and an upward-pointed laser to build a 3D map of the environ-
ment. The approach filters outliers based on distance to generate compact 3D maps,
and a simplification process is used for real-time rendering by fusing look-alike poly-
gons, thus reducing complexity. A low cost 3D laser range finder system is proposed
in [24], by controlling a 2D range finder with a servo motor. A digital camera for
texture mapping is mounted on top of the laser range finder. While rotating, several
pictures are taken so that texture mapping can be applied.
Following these initiatives, several authors moved to 3D model construction and
mapping in outdoor environments by matching aerial photographs with ground level
laser scans via Markov localization [25], or combining dual laser systems for 3D
data collection [26, 27]. Moreover, Extended Kalman Filter (EKF) techniques for
3D positioning tracking have also been applied for 3D environment reconstruction
in diverse works using rotating laser-scanning setups [29, 30]. These works place
important focus on data acquisition, surface segmentation, feature extraction, point
cloud registration and fusion, and the state model for localization. A generated 3D
reconstruction of an office environment in [30] is illustrated in Fig. 1a.
Rocha et al. [31] proposed building 3D volumetric maps using cooperative mobile
robot teams equipped with stereo-vision range sensors. The problem is addressed
using a probabilistic approach based on information theory. Robots cooperate through
efficient information sharing, and an explicit representation of uncertainty is defined
via the map's entropy. A volumetric map obtained in [31] is presented in Fig. 1b. Using
a similar principle, an Entropy Minimization SLAM system for creating dense 3D
visual maps of underwater environments is proposed in [32]. The approach exploits
dense information coming from a stereo system, and performs robust ego-motion
estimation and global-rectification.
Full 6DoF SLAM has been studied more recently [33, 34]. In these works, the
authors focus on estimating the 6 degrees-of-freedom (DoF) pose, namely the three
Fig. 1 a 3D reconstruction of an office environment [30]. b Volumetric map built by a cooperative team of mobile robots [31]
Cartesian coordinates and the three Euler angles. Precise Iterative Closest Point
(ICP)-based scan matching and efficient simplification methods (e.g. Taylor expan-
sion and Cholesky decomposition) are proposed to deal with the complexity and
resulting non-linearities of these approaches, leading to globally consistent 3D rep-
resentations of outdoor areas. Additionally, a robust 3D SLAM system applicable
to non-textured environments is proposed in [35], based only on a stereo camera.
By aligning edge points between frames via the ICP algorithm, the system is able to
reliably build detailed 3D maps even under noisy conditions.
Initial work with time-of-flight (ToF) cameras focused on the fusion of low
resolution data from Photonic Mixing Devices (PMD) with high resolution RGB
images [36, 37] to generate appropriate depth maps. These approaches allowed
overcoming the limitations of the first generation of ToF cameras, such as noisy
readings, poor performance on textured scenes, low resolution, restricted field of
view (FoV), and lack of colored information. Following these initial works, several
authors equipped robots with ToF cameras for pose estimation, 3D perception and
map building [38–40], proposing different calibration and filtering techniques, as
well as ego motion estimation approaches and data fusion with stereo or spherical
cameras, even endowing robots with planar mirrors to overcome the limited FoV of
ToF cameras [41].
With the boom of motion sensing devices, such as the Microsoft Kinect or the Asus
Xtion, RGBD cameras became easily available for roboticists worldwide, and several
approaches for 3D mapping using these cameras were presented [42, 43]. These
works provide full 3D mapping systems, estimating the camera’s motion through
spatial features and pose optimization, thus generating occupancy voxel grid models
that are globally consistent. The wide availability of RGBD cameras also allowed
to build 3D maps using UAVs [44] and ground robots [45]. Moreover, Hornung
et al. [46] proposed a key contribution for the 3D mapping literature, by presenting
a method to keep compact 3D volumetric models, called OctoMap. The approach
is based on octrees for spatial subdivision in 3D and uses probabilistic occupancy
estimation, allowing to update the representation efficiently and modeling the data
consistently, while keeping the memory requirements low.
ROS supports many 3D mapping systems, such as RGBD-SLAM [47],3 Octomap
Mapping [46],4 ORB-SLAM2 [48],5 LSD-SLAM [49],6 ETHZ ASL ICP Map-
ping [50],7 Cartographer [51],8 HDL Graph SLAM [52],9 LOAM [53],10 or
DSO [54].11 In the next section, we present the low-cost sensors and 3D mapping
approach used in this guide.
3 Preliminaries
3 http://wiki.ros.org/rgbdslam.
4 http://wiki.ros.org/octomap.
5 https://github.com/ethz-asl/orb_slam_2_ros.
6 https://github.com/tum-vision/lsd_slam.
7 http://wiki.ros.org/ethzasl_icp_mapper.
8 http://wiki.ros.org/cartographer.
9 https://github.com/koide3/hdl_graph_slam.
10 https://github.com/daobilige-su/loam_velodyne.
11 https://github.com/JakobEngel/dso_ros.
12 https://www.xbox.com/xbox-one/accessories/kinect.
13 https://www.slamtec.com/en/Lidar/A2.
Fig. 2 Sensors used in the 3D indoor Mapping system of this work. a Kinect v2. b RPLIDAR A2
14 http://introlab.github.io/rtabmap.
15 http://wiki.ros.org/rtabmap_ros.
16 http://wiki.ros.org/hector_mapping.
To install and run the proposed system, we will assume that the reader has basic Linux
and ROS knowledge, especially regarding ROS launch XML files,17 Transforms,18
and basic understanding of how RTAB-Map and Hector Mapping work. We also
assume that a PC and the two sensors are available, interconnected as presented in
Fig. 3. Please follow carefully the installation process of all the open source tools:
1. Install the latest Ubuntu 16.04 Linux OS LTS Version, available at:
http://ftp.dei.uc.pt/pub/linux/ubuntu/releases/16.04
2. After installing the OS, install the Ubuntu updates by typing in the terminal:
$ sudo apt upgrade
3. Install ROS Kinetic Kame, following the instructions at:
http://wiki.ros.org/kinetic/Installation/Ubuntu
4. Setup your ROS catkin workspace, by typing in the terminal:
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
open your bash configuration file (at /home/$USER/.bashrc):
$ gedit ~/.bashrc
and add the following two lines at the end:
source ~/catkin_ws/devel/setup.bash
export ROS_WORKSPACE=~/catkin_ws
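As a quick sanity check (a minimal sketch, assuming the steps above completed without errors), you can verify in a new terminal that the workspace is picked up by your shell:
$ echo $ROS_WORKSPACE    # should print the path to your catkin workspace
$ roscd                  # with ROS_WORKSPACE set, this changes into the workspace folder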
17 http://wiki.ros.org/roslaunch/XML.
18 http://wiki.ros.org/tf.
Fig. 3 Interconnection of the PC and the two sensors, including the 19 V and 12 V power connections
We will start by connecting the sensors, and then launch the software. In case the
reader is not able to run the system for any reason, e.g. no access to the sensors, we
also provide a ROS bag dataset, which can be used to run the software offline.
In Fig. 4 we present our experimental setup. We have used a pushcart base to drive the sensors around, with a lead-acid battery powering the Kinect v2 and the NUC mini-PC, which runs Ubuntu and ROS. The Kinect v2 is connected to the PC using a USB
3.0 port, and the RPLIDAR A2 can be powered by any USB port of the PC. Using
a similar setup to the one described here, please follow these steps to expose sensor
data into ROS:
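The exact commands depend on your system; as a hedged sketch of the typical driver setup (assuming the catkin workspace created earlier, assuming the launch files live in the rtabmap_utils package like the rtabmap.launch file used later on, and noting that iai_kinect2 additionally requires libfreenect2 to be installed first), the required packages can be obtained and built as follows:
$ cd ~/catkin_ws/src
$ git clone https://github.com/code-iai/iai_kinect2.git          # Kinect v2 ROS driver
$ git clone https://github.com/robopeak/rplidar_ros.git          # RPLIDAR A2 ROS driver
$ git clone https://github.com/ingeniarius-ltd/rtabmap_utils.git  # launch files for this tutorial
$ cd ~/catkin_ws && catkin_make
After building, both drivers and the static transforms of Fig. 5 can be started with the single launch file of Fig. 6:
$ roslaunch rtabmap_utils run_kinect_and_rplidar.launch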
Fig. 4 Experimental setup for 3D indoor mapping. a Real experimental setup. b Visualization in rviz
19 https://github.com/robopeak/rplidar_ros/wiki.
20 https://github.com/ingeniarius-ltd/rtabmap_utils.
<!-- Adjust the transformation between your kinect2 and the RPLIDAR laser -->
<!-- In a perfectly aligned world, ypr would be: "-1.57 0.0 -1.57" -->
<node pkg="tf" type="static_transform_publisher" name="static_tf_kinect2_rplidar"
      args="0.125 0.09 -0.08 -1.5 0.0 -1.34 laser kinect2_link 100"
      respawn="true"/> <!-- 10Hz -->

<!-- Do not change. Align "kinect2_laser_link" with "laser" by inverting
     the axis of "kinect2_link" -->
<node pkg="tf" type="static_transform_publisher" name="static_tf_kinect2_inverted_link"
      args="0.0 0.0 0.0 1.57 -1.57 0.0 kinect2_link kinect2_laser_link 100"
      respawn="true"/> <!-- 10Hz -->
</launch>
Fig. 5 Static transform between the RPLIDAR A2 and Kinect 2 links, and inversion of the Kinect
v2 link
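For reference, tf's static_transform_publisher expects its arguments in the order
$ rosrun tf static_transform_publisher x y z yaw pitch roll parent_frame child_frame period_in_ms
which is why the rotations in Fig. 5 are specified as yaw-pitch-roll, and why the trailing 100 (a 100 ms period) corresponds to the 10 Hz publishing rate noted in the comments.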
</launch>
Fig. 6 Running the Kinect v2, RPLIDAR A2 and the static transforms in a single ROS Launch file
Now that the sensors’ data are being published in ROS, it is time to run the two
algorithms: Hector Mapping to extract a consistent odometry estimation that will
feed RTAB-Map, which in turn will create the 3D representation of the environment.
We have prepared yet another launch file, called rtabmap.launch with the
content of Fig. 7. There are several important aspects in this launch file that are
worth mentioning. Firstly, we set the pub_map_scanmatch_transform and
the pub_odometry parameters of Hector Mapping to true to generate the
intended odometry estimates extracted by matching consecutive laser scans. We
also instruct Hector Mapping to acquire scan data in the topic /scan which is pub-
lished by the RPLIDAR A2 driver. On the RTAB-Map side of things, we remap the
odometry input topic to /scanmatch_odom, which is where the Hector Mapping
algorithm publishes the odometry information. Moreover, we also remap the input
RGB image, Depth image and camera information topics to the ones published by
the Kinect v2 driver, thus appropriately feeding RTAB-Map with information coming
from the Kinect v2. For detailed information on the available parameters for each
algorithm, the interested reader should refer to the ROS Wiki.21,22,23
Besides the two algorithms, we also run rviz—the ROS visualization tool24 —to
analyze the data and troubleshoot.
To start the system, assuming that run_kinect_and_rplidar.launch is
already running (step 5 in Sect. 5.1), we just have to type the following command in
a new terminal:
$ roslaunch rtabmap_utils rtabmap.launch
A new window with rviz will pop up, and you should see something similar to
Fig. 8. You are now ready to map the environment.
In our experimental setup (cf. Fig. 4), we have used a pushcart with the sensors
mounted on top, and we have pushed the cart to map a large indoor area with a long
corridor. In case the reader does not have access to a similar setup, we have recorded
our sensor data into a ROS bag dataset, which can be played back in ROS. In the
next lines, we provide the instructions to run the software on top of our dataset. For
this task you will need at least 30 GB of free space in your system, and the first three
steps may take a fair amount of time:
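As a general reference (a hedged sketch using a placeholder file name rather than the actual name of our recording), preparing and playing back a ROS bag dataset typically follows this pattern:
$ rosbag info indoor_mapping_dataset.bag      # inspect topics, duration and size
$ rosparam set use_sim_time true              # use the timestamps recorded in the bag
$ rosbag play --clock indoor_mapping_dataset.bag   # publish the recorded sensor data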
21 http://wiki.ros.org/hector_mapping#Parameters.
22 http://wiki.ros.org/rtabmap_ros#Parameters.
23 http://wiki.ros.org/rtabmap_ros/Tutorials/Advanced%20Parameter%20Tuning.
24 http://wiki.ros.org/rviz.
<!-- rtabmap.launch:
     Launch RTAB-Map with Kinect2 and RPLidar A2 -->
<launch>

  <!-- RTAB-Map: get a consistent 3D Map fed by the Hector odometry -->
  <group ns="rtabmap">
    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen"
          args="--delete_db_on_start">
      <param name="subscribe_depth"  value="true"/>
      <param name="subscribe_scan"   value="true"/>
      <param name="frame_id"         value="$(arg frame_id)"/>
      <param name="cloud_decimation" value="2"/>
      <param name="cloud_max_depth"  value="5.0"/>
      <param name="cloud_voxel_size" value="0.01"/>
      <param name="map_cleanup"      value="false"/>
Fig. 7 ROS Launch file for running RTAB-map and Hector Mapping with a Kinect v2 RGB-D
sensor and a RPLIDAR A2 laser range scanner
</launch>
Fig. 7 (continued)
6. Finally launch the file created to run the 3D indoor mapping algorithm on top of
the dataset:
$ roslaunch rtabmap_utils rtabmap_with_bag.launch
Again, a new window with rviz will pop up (similarly to Fig. 8) and you will see
the environment in the dataset being progressively mapped.
In Fig. 9, we illustrate some results of the 3D map built using our experimental setup
and the approach described in this paper. As can be seen in the pictures and the
video of the experiment, the system is able to build a consistent and detailed 3D map
of the environment, without ever losing track of the sensors’ pose. In Fig. 10, we
present the coordinate frames that are used in ROS and their relation, while running
the system. Note that /map is the global frame used for building the 3D map.
25 This will replace step 5 in Sect. 5.1 by publishing sensor data and the transforms.
Fig. 9 Illustrations of the 3D map in different areas of the building. Full video available at: https://
www.youtube.com/watch?v=MlrcTyXy5No
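One way to export the assembled point cloud into a PCD file (a minimal sketch, assuming the map cloud is published by RTAB-Map on the /rtabmap/cloud_map topic) is to use the pointcloud_to_pcd tool from pcl_ros:
$ rosrun pcl_ros pointcloud_to_pcd input:=/rtabmap/cloud_map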
and another command to convert the PCD file into a PLY file (if needed):
$ pcl_pcd2ply <input_file>.pcd <output_file>.ply
26 http://www.meshlab.net.
Fig. 10 View of the ROS coordinate frames (TF tree) while running the system on the dataset: map → hector_map → laser → kinect2_link, with kinect2_rgb_optical_frame, kinect2_laser_link and kinect2_ir_optical_frame attached below kinect2_link. The transforms are broadcast by /rtabmap/rtabmap, /hector_mapping and the rosbag playback node, at rates between roughly 10 and 100 Hz
It is our belief that robotics should be accessible to all, and innovation and advance-
ments do not necessarily come only from the robotics research community, but also
from the potential of hobbyists and the Do It Yourself (DIY) community, when pro-
vided with the right tools. For this reason, this paper presents a thorough educational guide that benefits from open source software and low-cost sensors, below the $500 price tag, with the construction of a consistent 3D map of the environment as the main application target.
We provide this educational guide in the hope that it can be useful for learn-
ing/teaching robotics, and to get more acquainted with ROS, existing perception
sensors such as RGB-D cameras and LRFs, SLAM algorithms and open source
projects in general. Furthermore, we provide a large dataset with sensor data, allow-
ing anyone without access to these sensors to build their work on top of them, which
can be used for mapping, as well as localization algorithms, scene and object detec-
tion and recognition, and much more.
In the future, we have plans to provide a similar dataset with several more sen-
sors, by adding two more laser range finder devices (Hokuyo URG-04LX and SICK
LMS500), and two additional RGB-D cameras (Orbbec Astra and Stereolabs ZED),
allowing for direct comparison of the sensors under the same conditions.
Acknowledgements We are sincerely thankful for the contributions on the free and open-source
frameworks adopted in this work, particularly: Stefan Kohlbrecher for his work on Hector Mapping,
Matthieu Labbé for his work on RTAB-Map, and Thiemo Wiedemeyer for his work on the iai_kinect2
ROS driver.
This work was supported by the Seguranças robóTicos coOPerativos (STOP) research project (ref.
CENTRO-01-0247-FEDER-017562), co-funded by the “Agência Nacional de Inovação” within the
Portugal2020 programme.
References
1. Araújo, A., Portugal, D., Couceiro, M.S., Rocha, R.P.: Integrating Arduino-based educational
mobile robots in ROS. J. Intell. Robot. Syst., Spec. Issue Auton. Robot. Syst. 77(2), 281–298.
Springer (2015)
2. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV library.
O’Reilly Media, Inc. (2008)
3. Noori, F.M., Portugal, D., Rocha R.P., Couceiro, M.S.: On 3D simulators for multi-robot sys-
tems in ROS: MORSE or Gazebo? In: Proceedings of the 2017 IEEE International Symposium
on Safety, Security, and Rescue Robotics (SSRR 2017), Shanghai, China, 11–13 October 2017
4. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Berger, E., Wheeler, R.,
Ng, A.Y.: ROS: an open-source robot operating system. In: Proceedings of the 2009 IEEE
International Conference on Robotics and Automation (ICRA 2009) Workshop on Open Source
Software, Kobe, Japan, 12–17 May 2009, vol. 3, no. 3.2 (2009)
5. Machado Santos, J., Portugal, D., Rocha, R.P.: An evaluation of 2D SLAM techniques available
in robot operating system. In: Proceedings of the 2013 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR 2013), Linköping, Sweden, 21–26 October 2013
6. Rusu, R., Cousins, S.: 3D is here: point cloud library (PCL). In: Proceedings of the 2011 IEEE
International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, 9–13
May 2011
7. Thrun, S., Fox, D., Burgard, W., Dellaert, F.: Robust monte carlo localization for mobile robots.
Artif. Intell. (AI) 128(1–2), 99–141 (2000)
8. Moore, T., Stouch, D.: A generalized extended Kalman filter implementation for the robot oper-
ating system. In: Proceedings of the 13th International Conference on Intelligent Autonomous
Systems (IAS 2013), Advances in Intelligent Systems and Computing, vol. 302, pp. 335–348,
July 2014. Springer (2014)
9. Marder-Eppstein, E., Berger, E., Foote, T., Gerkey, B., Konolige, K..: The office marathon:
robust navigation in an indoor office environment. In: Proceedings of the 2010 IEEE Inter-
national Conference on Robotics and Automation (ICRA 2010), Anchorage, AK, USA, pp.
300–307, May 2010
10. Hsiao, K., Chitta, S., Ciocarlie, M., Jones, E.G.: Contact-reactive grasping of objects with
partial shape information. In: Proceedings of the 2010 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2010), pp. 1228–1235, October 2010
11. Bouchier, P.: Embedded ROS. IEEE Robot. Autom. Mag. 20(2), 17–19 (2013)
12. Faria, D., Martins, R., Lobo, J., Dias, J.: Extracting data from human manipulation of objects
towards improving autonomous robotic grasping. Robot. Auton. Syst. 60(3), 396–410. Elsevier
(2012)
13. Portugal, D., Couceiro, M.S. Rocha, R.P.: Applying Bayesian learning to multi-robot patrol.
In: Proceedings of the 2013 International Symposium on Safety, Security and Rescue Robotics
(SSRR 2013), Linköping, Sweden, 21–26 October 2013
14. Sales, F.F., Portugal, D., Rocha, R.P.: Real-time people detection and mapping system for a
mobile robot using a RGB-D sensor. In: Proceedings of the 11th International Conference
on Informatics in Control, Automation and Robotics (ICINCO 2014), Vienna, Austria, 1–3
September 2014
15. Alami, R., Albu-Schaeffer, A., Bicchi, A., Bischoff, R., et al.: Safe and dependable physical
human-robot interaction in anthropic domains: state of the art and challenges. In: Proceedings
of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS
2006), Workshop on Physical Human-Robot Interaction in Anthropic Domains, Beijing, China,
October 2006
16. Rocha, R.P., Portugal, D., Couceiro, M., Araújo, F., Menezes, P., Lobo, J.: The CHOPIN project:
cooperation between Human and rObotic teams in catastroPhic INcidents. In: Proceedings of
the 2013 International Symposium on Safety, Security and Rescue Robotics (SSRR 2013),
Linköping, Sweden, 21–26 October 2013
17. Portugal, D., Marques L., Armada, M.: Deploying field robots for humanitarian demining:
challenges, requirements and research trends. In: Mobile Service Robotics: 17th International
Conference on Climbing and Walking Robots (CLAWAR 2014) and the Support Technologies
for Mobile Machines, pp. 649–656. World Scientific Publishing (2014)
18. Couceiro, M.S., Portugal, D.: Swarming in forestry environments: collective exploration and
network deployment. In: Swarm Intelligence—From Concepts to Applications, IET (2017)
19. Siebert, S., Teizer, J.: Mobile 3D mapping for surveying earthwork projects using an unmanned
aerial vehicle (UAV) system. Autom. Constr. 41, 1–14. Elsevier (2014)
20. Montemerlo, M., Thrun, S.: Large-scale robotic 3-D mapping of urban structures. In: Experi-
mental robotics IX, pp. 141–150. Springer, Berlin, Heidelberg (2006)
21. Sequeira, V., Gonçalves, J.G.M., Ribeiro, M.I.: 3D environment modelling using laser range
sensing. Robot. Auton. Syst. 16, 81–91 (1995)
22. Thrun, S., Burgard, W., Fox, D.: A real-time algorithm for mobile robot mapping with appli-
cations to multi-robot and 3D mapping. In: Proceedings of the IEEE International Conference
on Robotics and Automation (ICRA 2000), San Francisco, CA, USA, 24–28 April 2000
23. Kanade, T., Yoshida, A., Oda, K., Kano, H., Tanaka, M.: A stereo machine for video-rate
dense depth mapping and its new applications. In: Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition (CVPR 1996), pp. 196–202, June
1996
24. Surmann, H., Lingemann, K., Nüchter, A., Hertzberg, J.: A 3D laser range finder for autonomous
mobile robots. In: Proceedings of the 32nd International Symposium on Robotics (ISR 2001),
pp. 153–158, 19–21 April 2001
25. Frueh, C., Zakhor, A.: 3D model generation for cities using aerial photographs and ground
level laser scans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2001), vol. 2, no. 2, pp. 31–38, Kauai, USA (2001)
26. Brenneke, C., Wulf, O., Wagner, B.: Using 3D laser range data for SLAM in outdoor environ-
ments. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003
27. Howard, A., Wolf, D.F., Sukhatme, G.S.: Towards 3D mapping in large urban environments.
In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), pp. 419–424, Sendai, Japan, September 2004
28. Cole, D.M., Newman, P.M.: Using laser range data for 3D SLAM in outdoor environments. In:
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2006),
Orlando, FL, USA, 15–16 May 2006
29. Kohlhepp, P., Pozzo, P., Walther, M., Dillmann, R.: Sequential 3D-SLAM for mobile action
planning. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2004), Sendai, Japan, 28 September–2 October 2004
30. Weingarten, J., Siegwart, R.: 3D SLAM using planar segments. In: Proceedings of the IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, 9–15
October 2006
31. Rocha, R., Dias, J., Carvalho, A.: Cooperative multi-robot systems: a study of vision-based
3-D mapping using information theory. Robot. Auton. Syst. 53(3–4), 282–311 (2005)
32. Sáez, J.M., Hogue, A., Escolano, F., Jenkin, M.: Underwater 3D SLAM through entropy min-
imization. In: Proceedings of the IEEE International Conference on Robotics and Automation
(ICRA 2006), Orlando, FL, USA, 15–19 May 2006
33. Nüchter, A., Lingemann, K., Hertzberg, J.: 6D SLAM—mapping outdoor environments. J.
Field Robot. 24(8–9), 699–722 (2007)
34. Borrmann, D., Elseberg, J., Lingermann, K., Nüchter, A., Hertzberg, J.: Globally consistent
3D mapping with scan matching. Robot. Auton. Syst. 56, 130–142 (2008)
35. Tomono, M.: Robust 3D SLAM with a stereo camera based on an edge-point ICP algorithm. In:
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2009),
Kobe, Japan, 12–17 May 2009
36. Lindner, M., Kolb, A., Hartmann, K.: Data-fusion of PMD-based distance-information and
high-resolution RGB-images. In: Proceedings of the IEEE International Symposium on Signals,
Circuits and Systems (ISSCS 2007), Iasi, Romania, 13–14 July 2007
37. Zhu, J., Wang, L., Yang, R., David, J.: Fusion of time-of-flight depth and stereo for high
accuracy depth maps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008
38. Prusak, A., Melnychuk, O., Roth, H., Schiller, I., Koch, R.: Pose estimation and map building
with a PMD-camera for robot navigation. Int. J. Intell. Syst. Technol. Appl. 5(3–4), 255–264
(2008)
39. May, S., Droeschel, D., Holz, D., Fuchs, S., Malis, E., Nüchter, A., Hertzberg, J.: Three-
dimensional mapping with time-of-flight cameras. J. Field Robot. 26(11–12), 934–965 (2009)
40. Holz, D., Droeschel, D., Behnke, S., May, S., Surmann, H.: Fast 3D perception for collision
avoidance and SLAM in domestic environments. Mob. Robot. Navig. 53–84. InTechOpen
(2010)
41. Pirker, K., Rüther, M., Bischof, H., Schweighofer, G., Mayer, H.: An omnidirectional time-of-
flight camera and its application to indoor SLAM. In: Proceedings of the 2010 International
Conference on Control Automation Robotics & Vision (ICARCV 2010), Singapore, 7–10
December 2010
42. Henry, P., Krainin, M., Herbst, E., Ren, X., Fox, D.: RGB-D mapping: using kinect-style depth
cameras for dense 3D modeling of indoor environments. Int. J. Robot. Res. 31(5), 647–663
(2012)
43. Endres, F., Hess, J., Engelhard, N., Sturm, J., Cremers, D., Burgard, W.: An evaluation of the
RGB-D SLAM system. In: Proceedings of the IEEE International Conference on Robotics and
Automation (ICRA 2012), Saint Paul, MN, USA, 14–18 May 2012
44. Bachrach, A., Prentice, S., He, R., Henry, P., Huang, A.S., Krainin, M., Maturana, D., Fox, D.,
Roy, N.: Estimation, planning, and mapping for autonomous flight using an RGB-D camera in
GPS-denied environments. Int. J. Robot. Res. 11(31), 1320–1343 (2012)
45. Oliver, A., Kang, S., Wünsche, B. C., MacDonald, B.: Using the Kinect as a navigation sensor
for mobile robotics. In: Proceedings of the 27th Conference on Image and Vision Computing
New Zealand (IVCNZ 2012), Dunedin, New Zealand, 26–28 November 2012
46. Hornung, A., Wurm, K.M., Bennewitz, M., Stachniss, C., Burgard, W.: OctoMap: an efficient
probabilistic 3D mapping framework based on octrees. Auton. Robots 34(3), 189–206 (2013)
47. Endres, F., Hess, J., Sturm, J., Cremers, D., Burgard, W.: 3D Mapping with an RGB-D camera.
IEEE Trans. Robot. 30(1), 177–187 (2014)
48. Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: an open-source SLAM system for monocular, stereo
and RGB-D cameras. IEEE Trans. Robot. 33(5), 1255–1262 (2017)
49. Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: large-scale direct monocular SLAM. In:
European Conference on Computer Vision (ECCV 2014), Lecture Notes in Computer Science,
vol. 8690, pp. 834–849. Springer (2014)
50. Pomerleau, F., Colas, F., Siegwart, R., Magnenat, S.: Comparing ICP variants on real-world
data sets. Auton. Robots 34(3), 133–148 (2013)
51. Hess, W., Kohler, D., Rapp, H, Andor, D.: Real-time loop closure in 2D LIDAR SLAM. In:
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2016),
pp. 1271–1278, Stockholm, Sweden, 16–21 May 2016
52. Koide, K., Miura, J., Menegatti, E.: A portable 3D LIDAR-based system for long-term and
wide-area people behavior measurement. IEEE Trans. Hum.-Mach. Syst. (under review)
53. Zhang, J., Singh, S.: LOAM: lidar odometry and mapping in real-time. In: Robotics: Science
and Systems Conference (RSS 2014), Berkeley, CA, July 2014
54. Engel, J., Koltun, V., Cremers, D.: Direct sparse odometry. IEEE Trans. Pattern Anal. Mach.
Intell. 40(3), 611–625 (2018)
55. Fankhauser, P., Bloesch, M., Rodriguez, D., Kaestner, R., Hutter, M., Siegwart, R.: Kinect
v2 for mobile robot navigation: evaluation and modeling. In: Proceedings of the 2015 IEEE
International Conference on Advanced Robotics (ICAR 2015), pp. 388–394, Istanbul, Turkey,
27–31 July 2015
56. Vasquez, H., Vargas, H.S., Sucar, L.E.: Using gestures to interact with a service robot using
Kinect 2. Res. Comput. Sci. 96, 85–93 (2015)
57. dos Santos, D.R., Basso, M., Khoshelham, K., de Oliveira, E., Pavan, N., Vosselman, G.:
Mapping indoor spaces by adaptive coarse-to-fine registration of RGB-D data. IEEE Geosci.
Remote Sens. Lett. 13(2), 262–266 (2016)
58. Singhania, P., Siddharth, R.N., Das, S., Suresh, A.K.: Autonomous navigation of a multirotor
using visual odometry and dynamic obstacle avoidance. In: Proceedings of the 2017 IARC
Symposium on Indoor Flight Issues, Beijing, China, September 2017
59. Dedek, J., Golembiovsky, M., Slanina, Z.: Sensoric system for navigation of swarm robotics
platform. In: Proceedings of the 2017 18th IEEE International Carpathian Control Conference
(ICCC 2017), pp. 429–433, Sinaia, Romania, 28–31 May 2017
60. Labbé, M., Michaud, F.: Online global loop closure detection for large-scale multi-session
graph-based SLAM. In: Proceedings of the 2014 IEEE/RSJ International Conference on Intel-
ligent Robots and Systems (IROS 2014), pp. 2661–2666, Chicago, IL, USA, 14–18 September
2014
61. Labbé, M., Michaud, F.: Appearance-based loop closure detection for online large-scale and
long-term operation. In: IEEE Trans. Robot. 29(3), 734–745 (2013)
62. Shi, J., Tomasi, C.: Good features to track. In: Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition (CVPR 1994), pp. 593–600 (1994)
63. Labbé, M., Michaud, F.: RTAB-map as an open-source Lidar and visual SLAM library for
large-scale and long-term online operation. J. Field Robot. (2018)
64. Kohlbrecher, S., Meyer, J., Von Stryk, O., Klingauf, U.: A flexible and scalable SLAM system
with full 3D motion estimation. In: Proceedings of the 2011 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR 2011), pp. 155–160, Kyoto, Japan, 1–5 November
2011
Path Planning and Following
for an Autonomous Model Car Using
an “Eye in the Sky”
Keywords Autonomous driving · Path planning · Path following · Eye in the sky
Rodrigo Rill-García: This research was supported in part by Consejo Nacional de Ciencia y Tec-
nología (CONACYT).
Marco Morales: This research was supported in part by Asociación Mexicana de Cultura A.C.
1 Introduction
Autonomous driving is a subject of research interest which has produced results that
have already been applied in successful commercial products (such as the Uber self-
driving cars). Reliability and security are important issues for these systems that need to be addressed.
An important requirement in robotics is the communication and cooperation
between devices. Under this scope, ROS is a user-friendly robotic middleware for
the large-scale integration of devices to build robotic systems.
This chapter takes this concept in order to suggest an alternative for the current
autonomous driving paradigms where only on-board information is used for naviga-
tion. The key idea consists in creating communication with an aerial camera or “eye
in the sky” (which in real-life scenarios can be provided by one or many drones) to
provide the vehicle with additional information about its context, thus improving three specific tasks: path planning, path following, and obstacle avoidance. Although this
project focuses on a car-like robot, it is important to notice that these tasks can be
translated to any mobile autonomous system such as domestic robots or industrial
transport robots, where the aerial camera condition can be easily satisfied.
To summarize, the current chapter provides the reader with a useful platform to test path planning and following algorithms for car-like robots. In particular, we will discuss the following topics:
– A brief discussion of Rapidly-Exploring Random Trees (RRT) for path planning
with nonholonomic robots using RGB images as occupancy maps
– A brief discussion on PID controllers for path following using RGB images as the
source of closed-loop feedback
– Creating and using worlds for Gazebo
– Creating an RGB camera for Gazebo, including a plugin for publishing to ROS
topics
– Getting pose data from Gazebo via ROS topics
– Sending markers to Rviz to display 3D shapes, 3D points, and lines, as well as
tracking a robot in the generated map
– Translating XY coordinates from an image to real-scale 3D points useful for dis-
play
2 Background
In this work, we have three controllers available to deal with nonholonomic constraints. At each iteration of the algorithm, one of these three controllers is chosen at
random to expand the tree to a random new node. These controllers are:
– Straight forward steering
– Left steering
– Right steering
In the Straight forward steering controller, a random distance is generated. Using
this distance, a line of that length is drawn from the current position of the vehicle;
however, the first part of that line must be tangent to the car’s current yaw orientation,
to ensure viability of the proposed path. The simplest case from the list is the first
one, as a straight line of length L with a slope equivalent to the robot’s orientation is
drawn.
The Left steering and Right steering controllers are equivalent to each other, but
mirrored. In these cases, in addition to the random distance, a random radius is
calculated. This is because an arc will be drawn instead of a straight line; thus, we
need the center of the circle and the angle of the portion of the circumference in
which the arc is inscribed.
For the three controllers described above, the expected orientation of the vehicle
at the end of the proposed line is the angle corresponding to the tangent line of
the circle at that point (in the case of the straight line, it is considered for practical
purposes as an arc from a circle with infinite1 radius).
This generation of random lines is executed over and over until the goal is reached.
However, two major considerations should be made:
– All these lines are drawn with respect to the local coordinate system of the car;
therefore, they should be later translated to the global system.
– Collision detection is performed by testing intersection between lines and obstacles
in the occupancy map. Thus, lines in collision are discarded.
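To make the geometry of these three controllers concrete, the following is a minimal sketch (not the implementation used in the repository), assuming the car's local frame has x pointing forward and y pointing to the left, and that the random distance d and radius r are sampled elsewhere:

#include <cmath>

// Pose of the proposed endpoint, expressed in the car's local frame.
struct LocalPose { double x, y, yaw; };

// Straight forward steering: a segment of length d along the current heading.
LocalPose steerStraight(double d) {
  return {d, 0.0, 0.0};
}

// Left steering: an arc of length d on a circle of radius r centered to the left of the car.
LocalPose steerLeft(double d, double r) {
  double theta = d / r;  // angle of the portion of the circumference in which the arc is inscribed
  return {r * std::sin(theta), r * (1.0 - std::cos(theta)), theta};
}

// Right steering: the mirrored case, with the circle centered to the right of the car.
LocalPose steerRight(double d, double r) {
  double theta = d / r;
  return {r * std::sin(theta), -r * (1.0 - std::cos(theta)), -theta};
}

The returned pose is expressed in the car's local frame; as noted above, it still has to be translated to the global coordinate system and checked for collisions before being added to the tree.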
Once the path is planned, the next task is to follow it. For this purpose, the
car provides control over two Degrees Of Freedom (DOF): velocity and steering.
Under this scheme, three different PID controllers were implemented in parallel:
two for the steering, and one for velocity. PID stands for Proportional, Integral and
Derivative, and those terms are combined in a formula of the form control_signal = K_p·e + K_i·∫e dt + K_d·(de/dt), where the K gains are tuned by the user, and e is the error (e = reference − current_state). This kind of controller is best suited for linear systems, and is especially useful when the mathematical model of such systems is unknown.
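As an illustration only (a minimal discrete-time sketch, not the controller code of the skycam package), one such PID term can be computed at every control iteration as follows, with dt being the time elapsed between iterations:

struct PID {
  double kp, ki, kd;       // gains tuned by the user
  double integral = 0.0;   // accumulated integral of the error
  double prev_error = 0.0; // error from the previous iteration

  double update(double reference, double current_state, double dt) {
    double e = reference - current_state;       // error
    integral += e * dt;                         // integral term
    double derivative = (e - prev_error) / dt;  // derivative term
    prev_error = e;
    return kp * e + ki * integral + kd * derivative;
  }
};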
The whole path is divided into subpaths, which consist of pairs of points. The goal of subpath n is the beginning of subpath n+1. As any curve can be described as the concatenation of many short straight lines, each of these subpaths consists of a straight line from point P1 to point P2, as seen in Fig. 1. Therefore, the desired output of the
1 As we can't actually use an infinite value, a large number is arbitrarily chosen so that the tangent looks collinear to the line previously drawn.
Fig. 1 Graphical representation of the errors calculated for the control law
control loop is the car moving over all these lines until reaching the goal (each line
being a control subtask).
For this specific task, three different errors are defined with respect to a coordinate
system generated by P1 and P2 :
– Velocity error: the distance between the current position of the car and P2 in the X’
axis. This can be treated as the horizontal distance between the car and the desired
point to reach, as seen in Fig. 1.
– Line error: the perpendicular distance between the current position of the car and
the X’ axis. This can be understood as the shortest distance between the car and
the line it should be following, as seen in Fig. 1.
– Orientation error: the angle between the car's X axis and the X' axis. This can be interpreted as how well the car is aligned to the line it must follow, as seen in Fig. 1. It is calculated using a rotation matrix approach presented by Caubet and Biggs to work with values in the range [−1, 1] [2].
Basically, the three errors are using the X’ axis as the reference. Further refinement
is done to achieve a better performance, but these three PIDs are the cornerstone of
the control law.
3 Related Work
Regarding path planning, for the 2007 DARPA Urban Challenge, Kuwata et al. [3] described a real-time motion planning algorithm, based on the RRT
approach, applicable to autonomous vehicles operating in an urban environment.
Their primary novelty was the use of closed-loop prediction in the framework of
RRT.
When it comes to obstacle avoidance, it is not enough to detect the interference; the plan must also be changed according to the new context. This is called path deformation (a path,
that has been computed beforehand, is continuously deformed on-line in response
to unforeseen obstacles). Kurniawati and Fraichard [4] addressed this problem with
a scheme called 2-STD, which operates in 2 steps: a collision avoidance step and a
connectivity maintenance step ensuring that the deformed trajectory remains feasible.
About the “eye in the sky” paradigm, the researchers in charge of the Emergency
Integrated Lifesaving Lanyard (EMILY) are working with tethered drones to create
an “eye in the sky” combined with onboard thermal sensing to autonomously navigate
EMILY to a cluster of people [5].
A recent work concerning path planning plus velocity/acceleration planning
was presented by Hu et al. [6]. According to the authors, the highlights of their
proposal are: a method providing an optimal path with an appropriate corresponding
acceleration and speed, and a proposed method for collision risks on single-lane and
multi-lane roads. It is worth remarking that their work addresses avoidance of both static and moving obstacles with dynamic path planning.
This application doesn’t require any particular package besides the ones installed by
default with ROS Kinetic and the ones from the repository mentioned for this chapter.
However, to be able to control the AutoNOMOS from ROS topics, it is necessary to
build a plugin2 contained in the repository.
A step-by-step guide on how to set up the project can be found in the README file of the repository. Nevertheless, here we describe the necessary steps corresponding to the plugin. In a terminal, access the Gazebo_plugin folder and
enter the following commands:
mkdir build
cd build
cmake ../
make
sudo gedit ~/.bashrc
The first four commands build the plugin. Once the plugin has been compiled, we
need to tell our system where to find our workspace and the plugin. To do this, add
the next lines at the end of the file opened by the fifth command:
source /path/to/repo/AutoNOMOS_Stardust/AutoNOMOS_simulation/devel/setup.bash
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/path/to/repo/AutoNOMOS_Stardust/Gazebo_plugin/build
Be sure to replace '/path/to/repo' with the path where you downloaded the repository. Those lines define the path variables for the project and the plugin. Once this is done, you are ready to build the whole project. To do this, open the workspace folder in a terminal and build it with catkin_make.
2 For further reference about plugins, it is highly recommended to read this tutorial: http://gazebosim.
org/tutorials?tut=plugins_hello_world.
Fig. 2 RViz as seen at start up. Note that no relevant information is shown yet
The README file also contains a quick guide for running the project.3 First, open the ~/AutoNOMOS_Stardust/AutoNOMOS_simulation folder in a terminal. This folder is the workspace used for the project. In different tabs, run the following commands:
roscore
rosrun rviz rviz
roslaunch skycam skycam.launch
roslaunch autonomos_gazebo my_world.launch
roscore is used just to start ROS. Then, rosrun rviz rviz opens the 3D visualizer;
once you execute the command, a window like the one in Fig. 2 should be open.
Once you execute roslaunch skycam skycam.launch, the package in charge of
control and visualization starts working. The only visible changes are shown in Fig. 3.
Please be sure to identify the new little white window opened by this command, as that one is in charge of controlling the moving obstacle in the world (Fig. 3).
Fig. 4 This is the Gazebo simulator. Please note the little black object on the blue line above the
map: it is the sky camera
Fig. 5 This is the final setup of our visualizer. As you can notice, no more “No image” labels are
shown, as Rviz is getting the topics it needs from Gazebo
Fig. 6 Rviz shows the AutoNOMOS mini following the path it planned to reach the desired goal.
The path is represented by a series of blue points (squares) joined by green lines
Once you were able to test the project, it is time to analyze its parts. Some slight
remarks should be done before starting:
– the packages are written in C++, so you need to be fairly familiar with the language
– the launch files (used to begin our ROS nodes) and the config files for Gazebo
models are written in xml, but only basic knowledge is enough to work with them
(at least for this project)
– the code in charge of creating our Gazebo models is written in the SDF format
(instead of the previously standard URDF4 )
Figure 7 shows the ROS architecture of this project, as generated by ROS itself.
6.1 Creating and Using a New Gazebo World for Our Robot
First we will discuss how to create a new world for our robot and the way we can use
ROS to load the robot in Gazebo within that world (although this second task can be
done with any preexisting world).
4 Further information about both formats and their use can be found here http://gazebosim.org/
tutorials?tut=ros_urdf.
Fig. 7 /skycam is the node in charge of path planning, path following and obstacle avoidance. /using_markers allows communication with the 3D visualization tool for ROS called Rviz. /cylinder_keyboard allows control over a mobile obstacle and /gazebo monitors communication with Gazebo
To create a new Gazebo world, you can open the simulator without ROS interven-
tion. Once there, you can either add Gazebo models or simple geometric 3D figures
to build your new world. In Fig. 8 we are showing how to put a dumpster in our
world. As you may notice, the “Insert” tab contains many different models; if you
want to add a new model to the list, you should add its folder in the next location:
/home/user/.gazebo/models (considering a typical installation in Ubuntu). Whenever
you are done building the world for your robot,5 you should click File ⇒ Save World
As. For the purpose of this project, you should save the new world at
~/AutoNOMOS_Stardust/AutoNOMOS_simulation/src/autonomos_gazebo/worlds
Please be sure the filename looks something like “desired_name.world” (the
.world part is the most important one). This is the name we will use to tell ROS
how to load our world into Gazebo.
To do this, look for the my_world.launch file in the next location:
~/AutoNOMOS_Stardust/AutoNOMOS_simulation/src/autonomos_gazebo/launch
It should look something like this:
1 <?xml version="1.0"?>
2 <launch>
3   <!-- We resume the logic in empty_world.launch, changing
      only the name of the world to be launched -->
5 You can read further information about creating Gazebo worlds here: http://gazebosim.org/
tutorials?tut=build_world.
Fig. 8 The options to add elements in the world are surrounded in red. Additionally, the location
of the dumpster model in the map is pointed with a red arrow
Look at line 7, as this is the one in charge of telling ROS which world we want
Gazebo to load. Within the repository provided for this chapter, the desired world
is called robotics.world; therefore, if you want to use your own world, you should change this filename to the one you want to use.
This launch file is also important because it is the one used to load our robots inside
Gazebo. Look at lines 10 and 16: the 2 nodes started with those 2 lines are used to
spawn the AutoNOMOS and the “eye in the sky”, respectively. Please notice that
both are located through a model.sdf file. Just as we did to add a model to the Gazebo
default list, we can add models to our ROS project copying their folders to the next
location:
~/AutoNOMOS_Stardust/AutoNOMOS_simulation/src/autonomos_gazebo/models
Thus, whenever you want to spawn a new model into your world, you must be
sure to add it in the correct path and write a new “spawn_model” node in your launch
file. However, how can you be sure that your spawned model doesn't collide with an object in the world? What if you need the robot to spawn at a specific location? Or at a
given angle? The next section sheds some light on these issues.
Once our world is ready, we aim to include our own robots/models there. In the
previous section we discussed how to tell our system we want certain models to be
spawned in the world we prepared for Gazebo, but we must define the way we want
to do that, as for practical purposes this spawn has 6-DOF (Degrees Of Freedom):
X,Y and Z location along with Pitch, Roll and Yaw orientation.
You must remember that in robotics (and pretty much in general) there is no such thing as absolute positions or orientations. Every part of a robotic system
is referenced to a global system and/or at least one local system. In this particular
project, it is easier to define our models with respect to a global reference.
Furthermore, our model should be able to interact with the rest of the world. Even
though you are able to develop models from scratch,6 it is recommendable to work
with existing models; that is indeed the approach we took for this project. As we
wanted a camera, the already existing model for the Microsoft Kinect7 was used.
Once the model was acquired, our job is to modify it in order to fulfill our needs.
As for this project, two modifications are needed: we want the camera in the sky
looking downwards and we need the RGB signal to be sent over ROS topics.
The first task takes just a line. Look at the next piece of code from the model.sdf
file in the rgb_cam model folder:
1 <?xml version="1.0"?>
2 <sdf version="1.5">
6 You can even contribute these models to the Gazebo Model Database. For further information
you can read the following tutorial: http://gazebosim.org/tutorials?tut=model_contrib&cat=build_
robot.
7 It was obtained from the link provided in this tutorial: http://gazebosim.org/tutorials?tut=ros_
depth_camera&cat=connect_ros.
Lines 4 and 5 are in charge of telling Gazebo that our model (whose name is "rgb_cam") is expected to be static (that is, it shouldn't move despite forces like gravity), located at the point (0, 0, 9.17) measured in meters, and oriented with Pitch = −1.5708, Roll = 1.5708 and Yaw = 0, given in radians. In other words, we want the camera to remain 9.17 m above the center of the world, looking at the ground. Note that the SDF <pose> element lists its six values in the order x y z roll pitch yaw, in meters and radians.
Once the camera is where we need it, we want ROS to acquire data from it. To do
this, we need a plugin; however, unlike the plugin discussed before, you don’t need
to compile this, just add it to your model.sdf file. Look how this works:
26 <sensor type="camera" name="sky_camera">
27   <update_rate>30.0</update_rate>
28   <camera name="rgb_cam">
29     <horizontal_fov>1</horizontal_fov>
30     <image>
31       <width>640</width>
32       <height>640</height>
33       <format>R8G8B8</format>
34     </image>
35     <clip>
36       <near>0.02</near>
37       <far>300</far>
38     </clip>
39     <noise>
40       <type>gaussian</type>
41       <!-- Noise is sampled independently per pixel on each frame.
42            That pixel's noise value is added to each of its color
43            channels, which at that point lie in the range [0, 1]. -->
44       <mean>0.0</mean>
45       <stddev>0.007</stddev>
46     </noise>
47   </camera>
48   <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
49     <alwaysOn>true</alwaysOn>
50     <updateRate>0.0</updateRate>
51     <cameraName>sky_camera</cameraName>
52     <imageTopicName>image_raw</imageTopicName>
53     <cameraInfoTopicName>camera_info</cameraInfoTopicName>
54     <frameName>camera_link</frameName>
55     <hackBaseline>0.07</hackBaseline>
56     <distortionK1>0.0</distortionK1>
57     <distortionK2>0.0</distortionK2>
58     <distortionK3>0.0</distortionK3>
59     <distortionT1>0.0</distortionT1>
60     <distortionT2>0.0</distortionT2>
When it comes to mobile robotics, pose estimation plays a key role. Work in this area is extensive, but Gazebo provides us with a solution for this task in the simulated environment. As mentioned before, in robotics there is no such thing as absolute positions or orientations; this is particularly important in simulations, because the components of the simulated world cannot interact if there is no certainty about the pose of each one of them.
The last point is really important, since Gazebo then knows at every moment the
pose of all the models in the world. The intuitive idea then is to use this knowledge
for our ROS project.9 In this particular project, Gazebo poses are particularly useful
for display purposes rather than actually pose estimation10 ; however, the knowledge
on how to extract poses from Gazebo can be really useful for the reader in many
8 To get a deeper understanding of Gazebo plugins with ROS, you can check this tutorial: http://gazebosim.org/tutorials?tut=ros_gzplugins.
9 When we are doing research or testing submodules of a system, it is often necessary to assume that some specific problems (like pose estimation in this case) are solved. However, be careful, as this is just an assumption.
10 Instead of using an IMU, Gazebo poses are used to estimate the orientation of the AutoNOMOS. However, this is the only use of Gazebo poses to help with navigation.
Fig. 9 The models included in the world are indicated with a red curly brace (})
Fortunately, all the information we need is published by Gazebo on a specific topic: "/gazebo/model_states".
The first thing to point out is that the messages published by Gazebo are, indeed, a
special type of message. In particular, for the aforementioned topic we are dealing
with a message of type gazebo_msgs/ModelStates. By reading the ROS documen-
tation on this type,11 the first thing we should notice is that the message is basically
composed of three different fields, each of which is an array: each element of an
array corresponds to one of the models in our world.
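For reference, the top-level layout of gazebo_msgs/ModelStates is the following (the field descriptions are ours):

string[] name                  # the name of each model currently in the world
geometry_msgs/Pose[] pose      # the pose of each model, expressed in the world frame
geometry_msgs/Twist[] twist    # the linear and angular velocity of each model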
Given this, if you want to get the information of a specific model, you just need
to know its index. This is simple, because the order of the array is the same as the
order in which the models are displayed in Gazebo (look at Fig. 9).
However, given the nature of our project, every time the project is run the order of
the spawned models is prone to change, so it is not possible to hard-code the index of
the desired model. In the skycam_nodelet.cpp file of the skycam package we present
a simple workaround for this; below is the function that is called back any time a
gazebo_msgs/ModelStates message arrives on the "/gazebo/model_states" topic:
717 void Skycam::GetStates(const gazebo_msgs::ModelStates &msg)
718 {
719   for (int model = 0; model < msg.name.size(); model++)
720   {
721     if (msg.name[model] == "unit_cylinder_0")
722     {
723       state_.model_name = msg.name[model];
724       state_.pose = msg.pose[model];
725       state_.twist = msg.twist[model];
726     }
727     if (msg.name[model] == "AutoNOMOS_mini")
728     {
729       float imuX = msg.pose[model].orientation.x;
730       float imuY = msg.pose[model].orientation.y;
731       float imuZ = msg.pose[model].orientation.z;
732       float imuW = msg.pose[model].orientation.w;
733
734       tf::Quaternion q1(imuX, imuY, imuZ, imuW);
735       tf::Matrix3x3 m(q1);
736       Rt_ = m;
737
738       double roll, pitch, yaw;
739       m.getRPY(roll, pitch, yaw);
740       current_yaw_ = float(yaw);
741       // ROS_INFO_STREAM("Yaw: " << current_yaw_);
742       status_ok_ = true;
743     }
744   }
745 }

11 http://docs.ros.org/jade/api/gazebo_msgs/html/msg/ModelStates.html.
In lines 721 and 727 we store the information of interest from the models called
"unit_cylinder_0" and "AutoNOMOS_mini", respectively. We know these names
because that is how we named those models; if you are in doubt about how you
named them, notice that the names correspond to the way they appear in Gazebo
(refer again to Fig. 9).
If you read carefully, from the AutoNOMOS we only get its orientation (notice that
it is expressed in the incoming message as a quaternion). Why are we storing
information about a cylinder? Because we need to keep track of the modifications in
the Gazebo world in order to update the map shown in Rviz. We will talk next about
how to work with Rviz.
This topic is enough to send as many different objects as we want, but we must
create a new variable for each one of them. Here is an example from the include file
of the using_markers package12:
visualization_msgs::Marker unit_box_0_;
visualization_msgs::Marker unit_cylinder_0_;
visualization_msgs::Marker unit_cylinder_1_;
visualization_msgs::Marker unit_box_1_;
visualization_msgs::Marker unit_box_2_;
visualization_msgs::Marker unit_box_3_;
visualization_msgs::Marker marker_points_, line_strip_;
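These markers reach Rviz through a ros::Publisher (the marker_pub_ used in the code below). A minimal sketch of how such a publisher is typically advertised follows; the topic name visualization_marker is the conventional one from the Rviz marker tutorials and is an assumption here, not necessarily the exact name used in the package:

#include <ros/ros.h>
#include <visualization_msgs/Marker.h>

// marker_pub_ is a class member in the actual package; shown here as a plain variable for brevity.
ros::Publisher marker_pub_;

void initMarkerPublisher(ros::NodeHandle &nh)
{
  // Rviz subscribes to this topic to display the markers; queue size 10 is arbitrary.
  marker_pub_ = nh.advertise<visualization_msgs::Marker>("visualization_marker", 10);
}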
Each one of them should be constructed with its own particular attributes before
publishing. For educational purposes, we will construct here the unit_box_0_ mes-
sage as an example for 3D figures. The next sample code is taken from the
using_markers_nodelet.cpp file in the package:
1 // Set the frame ID and timestamp. See the TF tutorials for information on these.
2 unit_box_0_.header.frame_id = "ground_plane::link";
3 unit_box_0_.header.stamp = ros::Time::now();
4
5 // Set the namespace and id for this marker. This serves to create a unique ID
6 // Any marker sent with the same namespace and id will overwrite the old one
7 // %Tag(NS_ID)%
8 unit_box_0_.ns = "basic_shapes0";
9 unit_box_0_.id = 0;
10 // %EndTag(NS_ID)%
11
12 // Set the marker type. This is, the desired shape
13 // %Tag(TYPE)%
14 unit_box_0_.type = visualization_msgs::Marker::CUBE;
15 // %EndTag(TYPE)%
16
17 // Set the marker action. Options are ADD, DELETE, and new in ROS Indigo: 3 (DELETEALL)
18 // %Tag(ACTION)%
19 unit_box_0_.action = visualization_msgs::Marker::ADD;
20 // %EndTag(ACTION)%
21
22 // Set the pose of the marker. This is a full 6DOF pose relative to the frame/time specified in the header
23 // %Tag(POSE)%
24 unit_box_0_.pose.position.x = -0.856451;
25 unit_box_0_.pose.position.y = -2.31594;
26 unit_box_0_.pose.position.z = 0.499799;
27 unit_box_0_.pose.orientation.x = 0.0;
28 unit_box_0_.pose.orientation.y = 0.0;
29 unit_box_0_.pose.orientation.z = 0.0;
30 unit_box_0_.pose.orientation.w = 1.0;
31 // %EndTag(POSE)%
32
33 // Set the scale of the marker -- 1x1x1 here means 1m on a side
34 // %Tag(SCALE)%
35 unit_box_0_.scale.x = 3.07935;
36 unit_box_0_.scale.y = 1.0;
37 unit_box_0_.scale.z = 1.0;
38 // %EndTag(SCALE)%
39
40 // Set the color -- be sure to set alpha to something non-zero!
41 // %Tag(COLOR)%
42 unit_box_0_.color.r = 1.0f;
43 unit_box_0_.color.g = 0.0f;
44 unit_box_0_.color.b = 0.0f;
45 unit_box_0_.color.a = 1.0;
46 // %EndTag(COLOR)%
47
48 // %Tag(LIFETIME)%
49 unit_box_0_.lifetime = ros::Duration();
50 // %EndTag(LIFETIME)%
51 // %EndTag(INIT)%
52
53 marker_pub_.publish(unit_box_0_);

12 ~/AutoNOMOS_Stardust/AutoNOMOS_simulation/src/using_markers.
Most of the code should be self-explanatory, but for further reference you can read
the whole tutorial on this topic in the ROS wiki.13 Nonetheless, there are some par-
ticularities we wish to discuss here.
First, look at line 2. As mentioned before, in robotics there is no such thing as
absolute positions or orientations; therefore, we need to tell our shape what its
reference frame is, and that is exactly what we are doing at line 2. Furthermore, when
you actually open Rviz, you may notice an option called "Fixed Frame" in the Displays
panel; this is where we tell Rviz which Gazebo element is to be used as the reference
for our Rviz 3D map. To avoid further problems, always set the frame_id and the fixed
frame to the same value; in this case, the ground model of our Gazebo simulation.
In lines 8 and 9 we are just giving identifiers to the objects we want to publish in Rviz;
be sure not to repeat id values within the same namespace, because otherwise the most
recently published shape with a certain id will replace previous shapes published with
the same id.
Line 19 tells Rviz what we want to do with the given shape. Since in this example we
want to add shapes to the map, we select the ADD option.
Lines 24–30 and 35–37 are used to define the geometry. The values shown in this
code were copied from the sdf file of our Gazebo world, so that the markers match the
Gazebo world. However, do you remember the little white window from Fig. 3? It is
used to move one of the red blocks in the Gazebo world and reflect the new position
in Rviz. Take a look at the next piece of code from the using_markers_nodelet.cpp file:
13 http://wiki.ros.org/rviz/Tutorials/Markers%3A%20Basic%20Shapes.
14 The available keys are WASD and up, down, left, right.
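In essence, that code takes the new pose of the moved block and republishes the corresponding marker so that Rviz reflects the change. A minimal sketch of the idea is shown below; the callback name, the message type, and the choice of unit_box_1_ are our own illustration rather than the exact code of the package:

// Hypothetical sketch: reflect in Rviz the new pose of a block that was moved in Gazebo.
// unit_box_1_ stands for whichever red block is being moved; the real package may differ.
void UsingMarkers::UpdateBoxPose(const geometry_msgs::Pose &new_pose)
{
  unit_box_1_.pose = new_pose;                  // take the pose reported for the moved block
  unit_box_1_.header.stamp = ros::Time::now();  // refresh the timestamp
  marker_pub_.publish(unit_box_1_);             // same ns/id, so the old marker is overwritten
}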
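As for marker_points_ and line_strip_, their construction follows the standard Rviz marker tutorial; a sketch of how such markers are typically set up is given below (the ids, scales and colors are placeholders, not necessarily the values used in the package):

marker_points_.header.frame_id = line_strip_.header.frame_id = "ground_plane::link";
marker_points_.header.stamp = line_strip_.header.stamp = ros::Time::now();
marker_points_.ns = line_strip_.ns = "path0";
marker_points_.action = line_strip_.action = visualization_msgs::Marker::ADD;
marker_points_.pose.orientation.w = line_strip_.pose.orientation.w = 1.0;

marker_points_.id = 10;                                      // must not clash with the ids of the shapes above
line_strip_.id = 11;
marker_points_.type = visualization_msgs::Marker::POINTS;    // drawn as a set of points
line_strip_.type = visualization_msgs::Marker::LINE_STRIP;   // drawn as lines joining consecutive points

marker_points_.scale.x = 0.1;   // for POINTS, scale.x/y are the point width/height in meters
marker_points_.scale.y = 0.1;
line_strip_.scale.x = 0.05;     // for LINE_STRIP, only scale.x (line width) is used
marker_points_.color.g = 1.0f;  // green points ...
marker_points_.color.a = 1.0f;
line_strip_.color.b = 1.0f;     // ... joined by blue lines
line_strip_.color.a = 1.0f;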
Note that the only relevant differences from before are that the shape types are now
POINTS and LINE_STRIP, and that no position is given. These two messages are
composed of sets of points and lines; that is why they do not have a specific position.
Therefore, we need to define those two sets, which is done simply by defining the
desired points (notice that these points are variables of type geometry_msgs::Point).
The next function is in charge of catching the points published by the path planner:
void UsingMarkers::NewPoint(const geometry_msgs::Point &new_point)
{
  if (new_point.z == 0.2)
  {
    p_.x = new_point.x;
    p_.y = new_point.y;
    p_.z = new_point.z;
    marker_points_.points.push_back(p_);
    line_strip_.points.push_back(p_);
  }
  else if (new_point.z == 0.4)
  {
    marker_pub_.publish(marker_points_);
    marker_pub_.publish(line_strip_);
  }
  else
  {
    marker_points_.points.clear();
    line_strip_.points.clear();
    marker_pub_.publish(marker_points_);
    marker_pub_.publish(line_strip_);
  }
}
According to the z value of the received point, we take one of three possible actions:
add a new point p to the list of points of both marker messages, publish the markers,
or clear the points of both markers and then publish them (now empty).
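On the planner side, the same convention can be used to stream a path to this callback: each waypoint is sent with z = 0.2, followed by one final dummy point with z = 0.4 to trigger the drawing. A minimal sketch of that publishing loop follows (our own illustration of the convention; the actual skycam publisher, path3D_, is discussed later):

#include <vector>
#include <ros/ros.h>
#include <geometry_msgs/Point.h>

// Sketch: publish a planned path under the z-value convention described above.
void publishPathForRviz(const std::vector<geometry_msgs::Point> &path, ros::Publisher &path_pub)
{
  geometry_msgs::Point p;  // fields default to zero
  for (size_t i = 0; i < path.size(); ++i)
  {
    p = path[i];
    p.z = 0.2;             // "store this point" action
    path_pub.publish(p);
  }
  p.z = 0.4;               // "draw the accumulated points and lines" action
  path_pub.publish(p);
}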
When you publish these markers, Rviz will draw all of the points in the points list of
the marker_points_ variable, and all of the lines connecting consecutive points in the
points list of the line_strip_ variable. With this knowledge, you can draw as many
points or lines as you need according to your project requirements. However, before
closing this section, we want to briefly discuss how to add a tracker for a certain
model in the Gazebo simulation, and how to visualize compatible ROS messages
from your project in Rviz.
In the Displays panel of Rviz there is an Add button. Once you click it, a menu will
pop up; there, select the Axes option and click OK. A new Axes object will appear in
the Displays panel. Expand it and look for the Reference Frame option: there you
should select a link of the model you want to track. With this, the model of interest
will be represented in the Rviz map as a coordinate system.
Notice that the Axes option was in the By display type tab. If you want to explore
which topics published in your project can be visualized in Rviz, move to the By topic
tab. There, Rviz will list, by type, all of your current topics that are compatible with
Rviz. The "Laser", "RRT", "FrontCam" and "UpperCam" windows in Fig. 2 were
added this way.
At this point you know how to create Gazebo worlds and models, how to com-
municate with Gazebo from ROS to spawn models and retrieve information, how
to add markers to Rviz for visualization, how to update these markers automatically
when required, how to keep track of the position of models in the Gazebo simulation,
and how to visualize topics of interest in Rviz.
From this point on, all that is left is to explain the package in charge of solving the
problem that motivated this project: skycam.
Before discussing the package itself, we will go over the substeps used to solve the
problem. Basically, the whole code revolves around three cyclic, sequential states:
stand-by, planning the path, and following the planned path. These states are repeated
over and over until the project is shut down, but some additional considerations were
added to the workflow to deal with particular cases. You can see a graphical
representation of the complete workflow in Fig. 10.
Each one of the states in the graph involves subtasks that must be properly performed.
The rest of this section briefly explains how these states are executed in the skycam
package.
Remember that this whole project is inspired by the idea of an "eye in the sky",
with that aerial camera being the main source of information for the vehicle. As such,
we want the vehicle to act according to the information sent by the camera. That is
why the state machine depicted in Fig. 10 is contained in the function used as the
callback whenever a frame is received from the camera. This function is
void Skycam::SkyImage(const sensor_msgs::ImageConstPtr &msg)
Whenever this function is called, the message from the camera is processed to extract
the information of interest from the observed world. However, as long as no point has
been published from Rviz, the system will not do anything besides processing images;
that is why we say the system is in stand-by.
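Conceptually, the body of that callback processes the frame and then dispatches on the state variable. The following is only a simplified sketch of that structure, not the actual function, which also handles the special cases described below:

// Simplified sketch of the dispatch inside the camera callback (illustrative only).
void Skycam::SkyImage(const sensor_msgs::ImageConstPtr &msg)
{
  sky_image_ = cv_bridge::toCvCopy(msg, "bgr8")->image;  // latest aerial frame
  // ... build the occupancy map and locate the car from sky_image_ ...

  if (state_machine_ == 1)         // a goal was received: plan a path with RRT
  {
    // path_ = RRT(...); on success, state_machine_ = 2
  }
  else if (state_machine_ == 2)    // a valid path exists: follow it with the PID controllers
  {
    // compute the steering and velocity commands towards the next checkpoint
  }
  // otherwise the system stays in stand-by until a goal arrives from Rviz
}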
At this point, the goal is defined as the point (−1,−1). When a goal point is
published by Rviz, the next function is called:
void Skycam::GoalFromRviz(const geometry_msgs::PoseStamped &goal_rviz)
{
  FinishTrack();
  float centimeters_per_meter = 100;
  goal_.x = int(goal_rviz.pose.position.x * centimeters_per_meter * pixels_per_centimeter_ +
                sky_image_.cols / 2);
  goal_.y = int(sky_image_.cols / 2 -
                goal_rviz.pose.position.y * centimeters_per_meter * pixels_per_centimeter_);
  ROS_INFO_STREAM(goal_);
  state_machine_ = 1;
}
Once state_machine_ is set to 1, the path is planned by calling the RRT function of
the skycam package. This function is too long to reproduce here, but we will mention
some details. As the name suggests, the function implements the RRT algorithm
described in Sect. 2. To execute this algorithm, we need the map of the world
(map_to_analyze), a way to locate the vehicle (car_map), the goal to reach (goal),
and the orientation of the vehicle (car_pose). The two remaining parameters are only
used for performance purposes.
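Based on that description, the call has roughly the following shape; the parameter types and the names of the two performance parameters are assumptions for illustration, since the actual signature is defined in the skycam package:

// Hypothetical sketch of the planner's signature, following the description above.
std::vector<cv::Point> Skycam::RRT(const cv::Mat &map_to_analyze,  // pixel-wise occupancy grid (red = obstacle)
                                   const cv::Point &car_map,       // centroid of the car in the image
                                   const cv::Point &goal,          // goal expressed in image coordinates
                                   float car_pose,                 // orientation (yaw) of the vehicle
                                   int max_iterations,             // performance parameter (assumed)
                                   float step_size);               // performance parameter (assumed)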
For the scope of this work, map_to_analyze is obtained as a pixel-wise occupancy
grid produced by a color segmentation strategy, where any red element in the frame is
identified as an obstacle. In a similar way, car_map is obtained by looking for a
green object in the frame; unlike map_to_analyze, we do not create an occupancy
grid but compute the centroid of the blob corresponding to the car in the frame. To deal
with illumination issues, the frame is first transformed from RGB to the Lab color
space. Once in the Lab space, a threshold is applied on the second channel for binary
color segmentation (either for red or for green).
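A sketch of this segmentation step with OpenCV is shown below. In OpenCV's 8-bit Lab representation the second channel (a) is centered at 128, with red pushing it above and green pulling it below that value; the threshold values here are placeholders, not the ones used in the package:

#include <opencv2/imgproc/imgproc.hpp>

// Sketch: pixel-wise red/green segmentation in the Lab color space.
void segmentRedGreen(const cv::Mat &frame_bgr, cv::Mat &red_mask, cv::Mat &green_mask)
{
  cv::Mat lab;
  cv::Mat channels[3];
  cv::cvtColor(frame_bgr, lab, cv::COLOR_BGR2Lab);  // Lab is less sensitive to illumination changes
  cv::split(lab, channels);                         // channels[1] is the "a" (green-red) channel

  cv::threshold(channels[1], red_mask, 150, 255, cv::THRESH_BINARY);        // a well above 128 -> reddish pixels
  cv::threshold(channels[1], green_mask, 110, 255, cv::THRESH_BINARY_INV);  // a well below 128 -> greenish pixels
}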
Notice that the RRT function returns a vector of points; however, these points are
defined in the coordinate system of the analyzed image, because that is what our
vehicle can understand from the external eye. Nonetheless, inside the function itself,
the skycam package publishes these points as 3D points for visualization in Rviz;
the publisher in charge of this task is path3D_.
15 ~/AutoNOMOS_Stardust/AutoNOMOS_simulation/src/skycam/config/config.yaml.
Once the path is planned, there is nothing more to do in this state, and state_machine_
changes its value to 2. However, sometimes it is actually impossible to find a path
given the current conditions of the world; if that is the case, the RRT function returns
an empty path and the object_too_close_ flag is activated (object_too_close_ = true).
While the object_too_close_ flag is true, the car tries to move backwards for a few
seconds and the RRT function is called again; the only way to deactivate the flag is to
find a non-empty path.
If a non-empty path was found, the system goes into state 2, that is, following the
path (state_machine_ = 2).
Being in state 2 means we were able to find a valid path, and now we wish to reach
the desired goal by following it. To do this, we implement two parallel PID
controllers (a compound one for steering and another one for velocity), as described
in Sect. 2; you can refer to Fig. 11 for a more detailed view of the control strategy. In
this section we only discuss briefly the refinement techniques used to improve the
performance of the system.
Even though the two controllers are able to control position and orientation, we cannot
ignore a key factor: inertia. If the car reaches the goal at high speed, it will overshoot
it; if the car tries to turn at high speed, it will most likely lose control. To work around
this, there are two speed restrainers that act on the control law defined by the PIDs
(a sketch of such a restrainer follows the list below):
Fig. 11 Block diagram describing the overall three-PID control strategy used for path following
– The more points of the path the vehicle has already reached, the lower the speed limit
– The greater the steering angle, the lower the speed limit
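A minimal sketch of such a restrainer is given below; the scaling factors and the way progress is measured are our assumptions, since the package implements its own variant:

#include <algorithm>
#include <cmath>

// Sketch: cap the speed command according to path progress and steering angle.
float restrainSpeed(float speed_cmd, int points_reached, int total_points,
                    float steering_angle, float max_speed)
{
  float progress = float(points_reached) / float(total_points);  // 0 at the start, close to 1 near the goal
  float limit = max_speed * (1.0f - 0.7f * progress);            // the further along the path, the lower the limit
  limit /= (1.0f + 2.0f * std::fabs(steering_angle));            // the sharper the turn, the lower the limit
  return std::max(-limit, std::min(speed_cmd, limit));           // clamp the PID output to the limit
}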
Additionally, let us consider the case where the car has surpassed a given checkpoint:
it is clear that it needs to move backwards. However, think of this situation: the car is
in front of the desired point, and a bit to the left of the proposed path. The PID for
steering will conclude that the wheels need to turn to the right, while the PID for
velocity will conclude that the car needs to move backwards: even if the car reaches
the desired point, it will be facing a totally misaligned direction. To work around this,
the control law obtained from the steering PID is multiplied by −0.5 whenever the
control law obtained from the velocity PID is negative. In practice, this showed good
results.
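In code, this refinement reduces to a couple of lines applied to the two control laws (a sketch; the variable names are ours):

// Sketch: flip and attenuate the steering command whenever the velocity command is negative,
// so that reversing does not leave the car facing away from the path.
float adjustSteeringForReverse(float steering_cmd, float velocity_cmd)
{
  if (velocity_cmd < 0.0f)    // the car is about to move backwards
    steering_cmd *= -0.5f;
  return steering_cmd;
}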
As such, whenever the AutoNOMOS reaches the goal point within a certain toler-
ance, the process is considered a success. The goal is set back to (−1, −1), the
goal_selected_ flag becomes false, and the state_machine_ variable is returned to 1.
However, what happens if the map changes while the vehicle is following its path?
In this case, the AutoNOMOS uses its on-board laser scanner16 to detect possible new
obstacles: if an object in front of it is close, the system compares the current map
with the one used to plan the route. The laser scanner detects objects horizontally all
around the car (360° with 1° resolution; from 0.08 to 6 m with 1 cm resolution17),
but only a portion in front of the car is evaluated; consider this the field of view of
the vehicle. If the zone evaluated by the scan is considerably different from what it
used to be according to the images, the car stops and plans a new path to reach the
desired goal. Once the new path is obtained, the AutoNOMOS continues to move
along this new route until it reaches the goal. And then, again, the system returns to
the stand-by state.
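A sketch of the frontal check against a sensor_msgs/LaserScan is shown below. The sector width and the distance threshold are placeholders, the zero angle is assumed to point forward (this depends on how the sensor is mounted), and the actual comparison in skycam is made against the map used for planning rather than a fixed threshold:

#include <cmath>
#include <sensor_msgs/LaserScan.h>

// Sketch: is there something close within a frontal sector of the scan?
bool obstacleAhead(const sensor_msgs::LaserScan &scan,
                   float sector_half_width = 0.35f,   // about 20 degrees to each side (placeholder)
                   float distance_threshold = 0.5f)   // "close" means nearer than 0.5 m (placeholder)
{
  for (size_t i = 0; i < scan.ranges.size(); ++i)
  {
    float angle = scan.angle_min + i * scan.angle_increment;
    angle = std::atan2(std::sin(angle), std::cos(angle));  // normalize to [-pi, pi]
    if (std::fabs(angle) > sector_half_width)
      continue;                                            // keep only the frontal sector
    if (scan.ranges[i] >= scan.range_min && scan.ranges[i] < distance_threshold)
      return true;                                         // something is close in front of the car
  }
  return false;
}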
7 Conclusions
In this chapter, the reader was provided with the knowledge needed to use our
package for autonomous driving with the AutoNOMOS mini model. In particular,
Gazebo and Rviz were used in parallel, along with ROS, to test a proposed solution
for the tasks of both planning and following a path autonomously in semi-static
environments, using an external sensor in the form of an aerial camera (or "eye in
the sky").
16 For this project, the AutoNOMOS has only two embedded sensors: a frontal color camera and
the above-mentioned laser scanner. Of these, the only one used for navigation is the laser, while the
front camera is only used for visualization purposes.
17 These specifications can be changed within the corresponding model.sdf file.
There are many details about the system that are not described in this chapter; for
a deeper understanding, we invite the reader to look carefully at the code provided in
the repository. However, we think the information provided in this text is enough not
only to use the project but also to modify and extend it. This last task is of particular
interest, since there is much that can be improved. Whether it is to extend this work,
to create a new solution using our repository as a framework, or simply to learn more
tools for using ROS, we hope our work is helpful to you.
Edgar Granados is an M.Sc. student in Computer Science and a part-time professor at the Insti-
tuto Tecnológico Autónomo de México (ITAM). He received a B.S. in Computer Engineering and
a B.S. in Industrial Engineering at ITAM. He received a CONACYT scholarship to pursue his
Masters and spent the summer of 2018 in a research internship at the Rutgers University PRACSYS
Lab. He has been a member of ITAM's robotics lab since 2004, where he has worked on multi-
ple projects such as the RoboCup SPL team, the RoboCup SSL team, and the AutoNOMOS car
project. As a member of the lab, he has published at RSS workshops and at COMROB. He has
also contributed to and participated in robotics tournaments such as the RoboCup and the Mex-
ican Tournament of Robotics. His main research interests are motion planning and probabilistic
robotics.
Marco Morales is an Assistant Professor in the Department of Digital Systems at the Instituto
Tecnológico Autónomo de México (ITAM). His main research interests are in motion planning
and control for autonomous robots. He received a Ph.D. in Computer Science from Texas A&M
University, an M.S. in Electrical Engineering and a B.S. in Computer Engineering from the Universidad
Nacional Autónoma de México (UNAM). He was awarded a Fulbright/García Robles scholarship
to pursue his Ph.D., a CONACYT scholarship to pursue his Masters, and was a SuperComputing
Scholar at UNAM. He has been a member of the National System of Researchers in Mexico. He has
served as Associate Editor for IEEE ICRA, IEEE/RSJ IROS, and the Robotics and Automation
Letters (RA-L). He has also served as an organizer of international and local robotics research and
competition meetings, including being one of the chairs of the International Workshop on the
Algorithmic Foundations of Robotics (WAFR) in 2008 and 2018, chair of the Mexican Robotics
Tournament in 2011, and a member of the organizing committee of RoboCup 2012 and sev-
eral editions of the Mexican Robotics Tournament and the Mexican School of Robotics. He is Pres-
ident and a founding member of the Mexican Federation of Robotics, where he has been a member
of the Board of Directors since 2010. He is chair of the Mexican Council of the IEEE Robotics
and Automation Society.