Comparison of
Robotic Simulation Environments
E.J.L. Wolfs
1439537
Supervisor:
Dr. E. Torta
Table of Contents

1 Introduction
2 Comparison Setup
  2.1 Comparison Criteria
  2.2 Manipulator Scenario
5 Perception in Gazebo
  5.1 Use Case Description
  5.2 Gazebo Room with Depth Camera
  5.3 2D Binary Occupancy Map
  5.4 3D Occupancy Map via OctoMap Package
  5.5 Simulation of IGT Robot
1 Introduction
In today's world, robots have become increasingly important and complex, and they are applied in many different fields. Robots are also being given more responsibility, especially in complex medical applications where reliability and consistency are key: the robot should do exactly what it is expected to do. Due to this complexity, optimisation of controller algorithms and software is a key factor in making sure that the robotic system works as intended. Validating these systems on real hardware is expensive and can therefore be carried out less often. To improve this optimisation process and to validate the robot before implementing it on a real-life setup, robotic simulation environments are used. These environments allow the robot to be modelled in a realistic representation of its real-world environment, reducing the time and costs of the design cycle [2]. According to a previous study [8], simulation environments make it easier to solve problems with algorithms because of their high controllability, which makes it possible to reduce the number of factors involved in a simulation, such as friction. Another advantage is that simulation environments are more predictable than the real world, which makes the results of experiments repeatable.
When validating a robot, it is important to choose a suitable simulation environment, since different environments offer different built-in features. Due to the increasing number of simulation environments and features, it is sometimes unclear which environment is best suited for a specific robotic simulation task. There is a wide range of robot types, such as mobile robots, humanoid robots, and manipulators, and each type requires different simulation capabilities in terms of, for example, sensor implementation, collision modelling or environment simulation. Several studies have examined the differences between simulation environments. One of them is a study on the analysis and comparison of robotics 3D simulators [3], which compares V-Rep, Unity and Gazebo and focuses mostly on the quality and usability of these environments. It concluded that V-Rep has more integrated features, while Gazebo requires more plugins. Such studies already give an indication of which environment might be more convenient to use for a specific simulation scenario.
The Robotics System Toolbox in Matlab provides tools for designing, simulating, and testing robots in one integrated environment. To visualise and test these tools, including complex integrated cameras and sensors, a simulation environment is needed. A realistic 3D multibody simulation environment such as Simscape Multibody can be used for this [16]. This physical modelling tool is integrated into Simulink and automatically generates a 3D animation of the robotic setup, making it convenient to test algorithms directly on the robot. MathWorks shows that it is also possible to use an external robotic simulation environment such as Gazebo [12] together with Simulink by means of co-simulation. Via co-simulation, Simulink and Gazebo are directly connected and can exchange data, so Simulink is not limited to Simscape Multibody. Based on this, the following question can be asked: "What are the main differences between simulating a robotic scenario in Simscape Multibody and Gazebo via co-simulation with Simulink?". This gives rise to the main objective of this study:

1. Compare Simscape Multibody and Gazebo based on defined comparison criteria by making use of co-simulation with Simulink.
In order to compare the simulation environments systematically, comparison criteria are needed. These criteria should be drawn up based on the performance indicators to be evaluated. A suitable scenario must then be chosen in which these criteria can be tested. Using the same scenario for both simulation environments allows for a fair comparison. This gives rise to the following sub-objective of this study:
1a. Define comparison criteria and a scenario in order to compare the simulation environments based on utility, usability and performance.
One requirement for these environments is that they are able to co-simulate with Simulink. This means that Simulink should be able to send inputs (for example, position, velocity and torque) to, and receive output data (for example, position and sensor data) from, the robot modelled in a specific simulation environment. To perform co-simulation, a connection between Simulink and Gazebo needs to be made. This leads to the following sub-objective of this study:

1b. Establish a connection between Gazebo and Simulink to perform a co-simulation with the defined scenario.
After comparing the two environments, the advantages of sensor simulation in Gazebo will be further elaborated from a use case perspective. This use case is based on the implementation of a Model Predictive Control (MPC) algorithm for an Image-guided Therapy (IGT) robot. For this project, it is necessary to simulate synthetic sensor data to estimate the position of obstacles in a room, which can then be used for obstacle avoidance. This gives rise to the second main study objective:

2. Investigate how to create a grid map from sensor simulation in Gazebo based on the requirements of the use case.
In Chapter 2, the comparison criteria are elaborated together with the definition of the scenario. The co-simulation that is used to simulate the scenario in Gazebo is further explained in Chapter 3. The comparison between Simscape Multibody and Gazebo is elaborated in Chapter 4, which is divided into different sections: examples are demonstrated with the scenario and tests are carried out to find the differences between the environments. At the end of each section, a conclusion summarises the main findings for the scenario. Finally, Chapter 5 describes perception in Gazebo applied to the use case; the implementation of 2D and 3D grid maps is explained and tested with the IGT robot. The report ends with Chapter 6, which provides a conclusion and recommendations.
2 Comparison Setup
This chapter describes the comparison criteria that serve as guidelines for comparing the simulation environments. Based on these criteria, a suitable scenario is chosen, which is then used to compare the environments on equal terms. The scenario description consists of an explanation of the chosen robot model, including a visualisation. Furthermore, the main setup of the Simulink model used for the comparison is explained.
• Utility: The functionalities that the simulation environment offers in terms of simulation pos-
sibilities. For example, whether it is possible to include built-in sensor models or different scene
objects in the simulation.
• Usability: How well users can execute/develop the functionalities that are possible with the
environment. For example, how convenient it is to install or use the software and what knowledge
is required to accomplish this.
• Performance: Examines the quality of the simulation in the environment. An example of this
could be the efficiency and accuracy of the computations. Specifications such as CPU core usage,
simulation time, and memory usage can be compared.
Based on these three sections, different criteria are chosen to be evaluated. The criteria that form the
basis of the comparison can be seen in Table 2.1.
For this study, the two simulation environments Simscape Multibody and Gazebo will be compared by
using co-simulation with Simulink. The defined comparison criteria form the basis of the comparison
and will be further elaborated in the following sections.
A robotic manipulator is fixed at its base within an empty environment that includes gravity. The manipulator consists of four revolute joints and two prismatic joints which can move independently. The four revolute joints actuate the arm of the manipulator and the two prismatic joints actuate the gripper. At the end of the manipulator, an end-effector with a controllable gripper is attached. This gripper can be fully opened and closed. By making use of co-simulation between the simulation environment and Simulink, the reference trajectory and controllers are defined within Simulink, while the joint position data is received from the simulation environment.
To perform an equal comparison, the Simulink models of the manipulator should be the same. In this model, the trajectory and controllers are defined and are therefore equal for both simulation environments. Starting from this Simulink model, two separate models are made: one contains the plant in Simscape Multibody and the other consists of a co-simulation connection with Gazebo. The plant refers to the physical model in which the description of the robot is defined and the corresponding behaviour of the robot is simulated.
Figure 2.2 shows a schematic representation of the comparison setup when simulating the OpenManipulator in Simscape Multibody. This environment is located within the Simulink environment (in Windows), so no extra connection is needed between the Simulink model and the simulation environment.
The schematic representation of the OpenManipulator in Gazebo can be seen in Figure 2.3. Gazebo is
installed on a virtual machine running on the Linux operating system (Ubuntu). A connection between
the virtual machine and the Simulink environment is needed to perform co-simulation.
Different Simulink models are used, adjusted for each comparison, for example, to include sensor information. Some models use the reference position directly to move the robot; others contain a controller that calculates the torque for a given position input. Appendix A.4 shows the Simulink model that forms the basis for the comparison. The plant model can be switched between the Simscape Multibody and the Gazebo environment. Four sine waves are used as input signals for the four revolute joints and a step signal is used to move the gripper. These signals first go to the controller, where the corresponding torques are calculated. The Simulink model in the OpenManipulator documents [17] provides a simple controller consisting of feedback PD controllers and a feedforward controller. The feedforward controller uses the Feedforward Controller block from the Robotics System Toolbox, which automatically calculates the torques needed for each joint to track the reference signal; a command-line sketch of this computation is given below. This block uses the URDF model of the robot, in which the inertia and masses of the links are defined. For the feedback controller, six PD controllers are used to control the four arm joints and the two gripper links separately. The feedback data comes from one of the simulation environments. Finally, visualisation plots are used to visualise the reference trajectory and the behaviour of the robot. These models, together with the model in Figure A.5, can be found in the repository [43]. Chapter 3 further explains how the connection between Gazebo and Simulink was established.
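To make the role of the feedforward controller concrete, the computation it performs can be reproduced from the Matlab command line with the Robotics System Toolbox. The sketch below is illustrative only: the URDF file name is an assumption and the reference velocities and accelerations are set to zero, whereas the Simulink block evaluates the inverse dynamics along the actual reference trajectory.

% Feed-forward torques via inverse dynamics (minimal sketch).
robot = importrobot("open_manipulator.urdf");   % URDF file name is assumed
robot.DataFormat = "row";
robot.Gravity = [0 0 -9.81];

q   = homeConfiguration(robot);   % reference joint positions
qd  = zeros(size(q));             % reference joint velocities
qdd = zeros(size(q));             % reference joint accelerations

tauFF = inverseDynamics(robot, q, qd, qdd);   % torques needed to track the reference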
3 Co-Simulation Simulink and Gazebo
An advantage of this plugin is that it does not require the creation of a ROS network. Installing the
plugin and adding it to the world file is enough to control the robot via Simulink. No additional ROS
nodes with topics are needed because the plugin automatically sends the data to the Gazebo topics.
One limitation of this plugin is that it does not support code generation. This means that it is not
possible, for example, to create a ROS node directly from the existing Simulink model used for the
co-simulation.
Instead of using the Gazebo plugin, it is also possible to connect Simulink to a ROS network via the ROS Toolbox [38]. Simulink can then publish messages, or receive them by subscribing to a ROS topic, on a ROS network that can be connected to Gazebo. With this method, Simulink and Gazebo are not synchronised: Simulink can publish to ROS topics at a different rate than the update rate of Gazebo. Via this configuration, it is possible to generate C++ code for stand-alone ROS nodes that can be directly deployed in the ROS environment.
To establish the connection with Simulink, the Gazebo Pacer block is used, see Figure 3.2a. This block comes from the Robotics System Toolbox library and can be found in the Gazebo co-simulation section of the Simulink library. In this block, the IP address of the virtual machine and the port number are set, see Figure 3.2b. With the Gazebo simulation environment running, the connection can be tested here. This connection block is used for all Simulink files; only the host name needs to be changed when using a Simulink file from the repository [43].
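The same connection can also be checked from the Matlab command line using the Gazebo co-simulation functions of the Robotics System Toolbox. In the minimal sketch below, the IP address is a placeholder for the address of the virtual machine and 14581 is the default port of the Gazebo plugin.

gzinit("172.16.34.131", 14581);   % IP of the Linux virtual machine (placeholder) and plugin port
models = gzmodel("list")          % lists the models in the running Gazebo world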
This connection setup forms the basis for connecting Simulink with Gazebo, and it can be modified to also use velocity or torque as an input. The repository [40] contains different examples that show how different parts of the OpenManipulator can be controlled using position, velocity and torque as input signals.
4 Comparison Simscape vs Gazebo
A general workflow scheme used to set up the manipulator scenario in Simscape Multibody and Gazebo is shown in Figure 4.1, highlighting the main differences. First, the URDF of the robot as well as the necessary STL files are needed for both simulation environments. For Simscape Multibody, the URDF first needs to be converted to the Simulink environment, which can be done via the built-in function smimport (a minimal call is shown below). Next, the elements of this URDF that are ignored during the conversion need to be added, including the STL references to the corresponding visual elements of the robot body. In Section 4.5.1 the effects of these missing elements are further elaborated. Finally, the robot model can be directly implemented in the Simulink file that contains the reference signals and controllers. Based on this workflow, it can be concluded that implementing a robot scenario in Simscape Multibody does not require many steps or prior programming knowledge.
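As a minimal illustration of the first step, the conversion can be started from the Matlab command line; the URDF file name is an assumption.

% Generates a new Simulink model containing the Simscape Multibody
% blocks (bodies and joints) described by the URDF.
smimport("open_manipulator.urdf");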
For Gazebo, the workflow is different. First, a new package is made in the catkin workspace of the virtual machine, after which the complete URDF of the robot can be added to the SDF/world file. Next, the necessary plugins can be added to the world or launch file, for example, the co-simulation plugin or ROS controller plugins if necessary. The catkin workspace is then ready to build, after which the environment can be opened with a previously created launch file. For the scenario, a co-simulation connection is used so that the controller resides in the Simulink environment. Therefore, the last step is to set up the co-simulation connection within a Simulink file using the blocks of the Robotics System Toolbox, as elaborated in Section 3.3. It can be concluded that Gazebo requires basic knowledge of ROS (the catkin workspace and launch files), the Linux shell environment and URDF/SDF files (XML), which makes the setup relatively more complex compared to Simscape Multibody.
According to the official MathWorks documentation [16] and previous literature [25], Simscape Multibody mainly focuses on simulating the mechanical and physical aspects of a robot. It provides limited functionality in terms of built-in sensor models that can generate synthetic data. No predefined virtual sensor models such as lidar, camera and IMU sensors were found that could be directly implemented without making an extra connection to another environment. A tool that requires such an extra connection is the Simulink 3D Animation product. Via this tool, a 3D visualisation of the robot model can be simulated from the Simscape Multibody model [1]. Different sensors can be added to the scene, such as a PointPickSensor. To use this tool together with Simscape Multibody, the complete visual representation of the robot needs to be made in the 3D Animation product, and an extra connection between the Simscape Multibody blocks and this tool needs to be made in Simulink.
Using the Automated Driving Toolbox and Navigation Toolbox of Matlab, it is possible to model virtual lidar sensors. However, using these directly in Simscape Multibody is not feasible because this sensor model relies on an environment rendered in Unreal Engine. Co-simulation between Simulink and Unreal Engine is possible via the Automated Driving Toolbox, allowing Simulink to receive data from virtual sensors in Unreal Engine [15]. Physical models from Simscape Multibody can be implemented in Unreal Engine while being controlled via Simulink. This toolbox is currently mainly used for vehicle simulations, but it can also be used for robotic applications. Furthermore, Matlab provides a Sensor Fusion and Tracking Toolbox that includes real-world sensor models. Nonetheless, no official documentation or example was found in which these sensors are implemented in Simscape Multibody.
The operation of this sensor is demonstrated on the manipulator scenario by measuring the distance between the specified world link and body link 5; see Figure 4.2 for the Simulink implementation. The input and output of the block are connected to the world frame and link 5, respectively. An additional output is enabled which sends the data to the Matlab workspace during the simulation. Between the Transform Sensor and the To Workspace block, a PS-Simulink Converter is used to connect the Simscape physical network to the Simulink blocks.
In Figure 4.3, the measured distance between the world frame and link 5 of the manipulator scenario is plotted over the simulation time. This sensor directly gives the exact distance between frames without noise, so there is no need to compensate for uncertainty.
Since, for example, a virtual lidar sensor cannot be modelled directly within the Simscape Multibody environment, the Transform Sensor block can be used to emulate a sensor that measures the distance to an object. This can be seen in the example shown in Figure 4.4. The OpenManipulator files from MathWorks [17] provide an example in which the manipulator catches a falling ball. In this model, the sensor is used to measure the distance from the world frame to the ball. The measured distance in x, y and z is transferred to a Stateflow chart, where the reference position of the robot is calculated accordingly. The sensor makes it convenient to test controller algorithms without having to simulate complex synthetic data from virtual sensors, since it directly gives the distance between the frames.
(a) Built-in joint sensors implementation (b) Torque data plot of simulation
Figure 4.5: Joint sensors tested on scenario
• Mass of link
The mechanical section contains four built-in sensors. These sensors are ideal: they do not take inertia, friction, energy consumption or delays into account.
These 1D sensors can be connected to a single degree of freedom of the 3D Simscape Multibody environment, for example, an actuator that operates in one direction. To make the connection between the 1D and the 3D environment, the Simscape Multibody Multiphysics Library provides blocks that establish this connection, and these connection blocks make use of the sensors. The library is also used in the OpenManipulator example to model a translational hard stop of the gripper; see the repository [46] for the implementation. Within the 1D environment, friction and stiffness properties of the joints can then be simulated in more detail.
➢ Within the Simscape Multibody model of the manipulator scenario the joint position, velocity,
acceleration, and torque can be measured directly from the joint blocks. This data, as well as
other sensor data, can be extracted and stored during the simulation.
➢ It is possible to measure mass and inertia properties from selected bodies of the manipulator
using the built-in inertia sensor block. Geometric properties, including the centre of mass and
inertia matrix, can be measured from the complete manipulator model or specified subsystems.
➢ Built-in ideal sensors from Simscape give the possibility to model electric actuators of the joints of
the manipulator scenario in more detail, making use of Simscape Electrical. 1D and 3D physical
models from Simscape can be combined to extend the dynamical properties of individual joints.
Gazebo is generally well known for its large variety of sensor models, according to other research comparing simulation environments [36]. In other robotic scenarios in Gazebo, for example [33], complex environments for mobile robots with virtual synthetic sensor data are simulated. The Gazebo documentation lists a variety of sensor classes such as altimeter, camera, contact, GPS, IMU, lidar and magnetometer sensors [5]. Predefined sensor models in XML format can be added to the world file of the simulation environment together with the necessary plugins.
The XML code of the sensor model is added to the world file of the scenario so that the sensor is simulated together with the OpenManipulator. Launching the world model gives the result shown in Figure 4.6.
Next to the manipulator, a Coke Can object taken from the standard Gazebo world models is placed. The blue lines in the environment represent the laser beams of the lidar. The laser is blocked by the two obstacles in the Gazebo environment, so no laser contours appear behind the objects.
To collect the data produced by the sensor, a new Gazebo Read block was added to the Simulink model in the same way as explained in Appendix A.3.2. The Gazebo topic of the Hokuyo sensor was selected to receive data from the sensor. Based on the MathWorks example [20], bus elements and a plot function were used to visualise the received data. In Figure 4.7, the data from the lidar is plotted for a specific time step of the simulation. Figure 4.7a shows the plotted data without a noise filter and Figure 4.7b shows the data after specifying a Gaussian filter in the XML code of the world file, simulating noise on the data.
(a) Lidar scan without noise (b) Lidar scan with Gaussian filter
Figure 4.7: Lidar plots from data received in Simulink
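The plot function essentially converts the received range vector from polar to Cartesian coordinates. A sketch of such a function is given below; the angle limits are assumptions and must match the Hokuyo settings in the world file.

function plotLidarScan(ranges, angleMin, angleMax)
% Plot a planar laser scan received from the Gazebo lidar topic.
ranges = double(ranges(:)).';                   % force a row vector
angles = linspace(angleMin, angleMax, numel(ranges));
valid  = isfinite(ranges) & ranges > 0;         % drop "no return" samples
x = ranges(valid) .* cos(angles(valid));
y = ranges(valid) .* sin(angles(valid));
plot(x, y, ".");
axis equal; grid on; xlabel("x [m]"); ylabel("y [m]");
end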
During the simulation, the data from the IMU sensor is sent to Matlab via co-simulation. In Figure 4.9
the acceleration data measured in the Y-direction is plotted as an example. Likewise, the velocity and
position of the IMU sensor can be stored in the Matlab workspace.
The data from the camera is transferred to Matlab, where it can be visualised, see Figure 4.10b. In the Simulink model, it is possible to increase or decrease the sample time of the camera data that is sent to Simulink. The RGB camera can also be mounted on top of the robot in the same way as was done for the IMU sensor, see Section 4.4.2. In Gazebo, the camera projection and the white lines move along with the robot's movement. Figure 4.11 shows how the camera is attached to the gripper of the robot.
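The raw image bytes received over the co-simulation bus can be converted into a Matlab image with a few lines. The sketch below assumes 8-bit interleaved RGB pixels stored row by row (the default camera format); the exact field names of the received bus are assumptions and should be checked against the message definition.

function img = gazeboImageToRGB(data, width, height)
% Reshape the raw byte stream of a Gazebo RGB image into an image matrix.
img = reshape(uint8(data), 3, width, height);   % channel x column x row
img = permute(img, [3 2 1]);                    % -> height x width x 3
imshow(img);
end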
➢ A lidar sensor can be simulated within Gazebo, measuring the distance to obstacles within the
environment. This information could be used to approximate the location of an object next to
the manipulator.
➢ Inertial measurement units (IMU sensors) can be connected to a link of the manipulator. The orientation, velocity, and acceleration of the connected link can be measured during the simulation.
➢ Data from an RGB camera model can be visualised in Matlab during the simulation. The
camera can be attached to the robot so that the viewpoint of the camera changes accordingly.
For example, the synthetic data could be further used for algorithms that can recognise the
colours or shapes of objects. This allows the manipulator to distinguish between different colours
and objects within the environment.
➢ In the SDF file, many sensor settings can be specified. It is possible to simulate sensor noise by applying a Gaussian filter to the sensor output data before the information is sent to Simulink. More options, such as the weight, visual description, range and sample rate of the sensor, can be modified or added.
• <transmission> Defines a relationship between an actuator and a joint of the robot to model gear ratios.
Comparing this list with the URDF model of the OpenManipulator, the following elements are not taken into account by the conversion: <collision>, <limit>, <scale>, <friction> and <geometry>. This means that the standard model of the robot in Simscape Multibody does not automatically take collision modelling, joint limits and friction of the contact surfaces into account. This explains why there was a difference in simulating joint limits between the environments while using the same URDF of the robot.
In the following experiment, collision with the ground plane below the OpenManipulator was tested. For the Gazebo simulation, a standard ground plane was included in the SDF file, and for Simscape Multibody an extra plane (provided by the MathWorks example [17]) was added to the model of the robot. As expected, Simscape Multibody does not automatically simulate collision interaction with the environment, see Figure 4.16a. In Gazebo, interaction with the ground was simulated, as can be seen in Figure 4.16b. The blue balls indicate the collision contacts and the green lines show an active force vector. It can be seen that the collision boundary is equal to the geometry of the robot itself.
The following example shows how spheres can be used as contact boundaries to simulate a collision between the OpenManipulator and the ground plane. In Simulink, a connection between the ground plane and the spherical solids is made using the Spatial Contact Force block, see Figure 4.17a. In this block, the contact stiffness, contact damping and transition region width are defined. Repeating the previous experiment, it can be seen that the OpenManipulator can no longer pass through the ground plane because the physical interaction between the spheres and the ground plane is simulated, see Figure 4.17b.
To simulate the collision in this example, Simscape Multibody makes use of a penalty method [23]. The colliding objects are modelled as a stiff spring with damping that is only active when the bodies are in contact with each other. This allows the sphere to penetrate the ground by a small amount. During the collision, the normal force is computed according to a spring-damper force law: the deeper the objects penetrate each other, the greater the normal force. Within the Spatial Contact Force block, the normal force and friction force magnitude can be measured during the simulation.
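In simplified form, this penalty law can be written as a spring-damper force that is only active while the bodies overlap. The sketch below omits the smoothing over the transition region that the actual block applies; k and d correspond to the contact stiffness and damping entered in the block.

function Fn = penaltyNormalForce(pen, penRate, k, d)
% Simplified penalty (spring-damper) contact law: pen is the penetration
% depth and penRate its time derivative.
if pen <= 0
    Fn = 0;                           % bodies are separated, no contact force
else
    Fn = max(k*pen + d*penRate, 0);   % no adhesive (pulling) force
end
end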
To understand how Gazebo simulates collisions, it is necessary to take a closer look at the physics engine used by Gazebo. For the manipulator scenario, the Open Dynamics Engine (ODE) is used, as specified in the SDF model. A physics engine is software that calculates the dynamic behaviour, including body collisions, friction and joint behaviour, during the simulation. The ODE uses "hard contacts" to simulate a collision between objects: when objects collide with a given velocity, a non-penetration constraint is applied [11]. It is not possible to penetrate the surface, and the contact force therefore does not vary over time; the "true" contact time is almost zero. The effect of the collision is simulated by giving the objects post-collision movement through a momentum exchange. This method is commonly used in real-time physics engines, where computation speed and robustness are key. Other physics engines make use of spring contacts (soft contacts), where penetration is possible, as in Simscape Multibody. These soft contacts can be used to simulate realistic contact forces, but are computationally expensive and more prone to errors.
➢ The specified collision boundaries in the URDF of the OpenManipulator are not taken into
account in Simscape Multibody. The robot can move through itself and the ground plane without
any physical interaction. This is different in Gazebo, where the robot collides with itself and the
ground plane.
➢ Collisions in Simscape Multibody are mostly simulated by simple objects like spheres. These
spheres can be placed on objects that need physical interaction. Reaction forces are computed
during the collision by simulating the bodies as virtual springs.
➢ The ODE physics engine used for the scenario in Gazebo simulates collisions with the hard contacts method, which is faster but not as accurate as soft contacts. The bodies are not simulated as virtual springs, resulting in a contact force that is constant during the collision.
For the measurements with Simscape Multibody, a fixed-step implicit solver is used, named ode14x. This solver is chosen because it is a fixed-step solver and is recommended for stiff physical models. Accordingly, the step sizes for the simulations are changed in the solver settings. For the Gazebo measurements, the standard ODE (Open Dynamics Engine) physics engine is used. The step sizes are changed in the Pacer block in the Simulink model, as shown in Section 3.3.2.
The same reference signals are used for both Simulink models, containing sine waves for each of the four revolute joints and a step signal for the gripper. The stop time in Simulink was set to 5 seconds for all the measurements. To minimise the influence of external factors during the time measurements, the simulations are carried out on one computer with no programs running in the background; see Appendix A.5 for the computer and virtual machine specifications. Each simulation is carried out 10 times, from which the average, standard deviation and real-time factor (the ratio between the wall-clock time taken for the simulation and the simulated duration) are calculated.
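The measurement procedure can be summarised by the sketch below, which repeatedly runs a Simulink model and reports the statistics. The model name and the use of tic/toc for wall-clock timing are assumptions about the measurement script.

nRuns    = 10;
stopTime = 5;                        % simulated duration in seconds
t = zeros(nRuns, 1);
for i = 1:nRuns
    tic;
    sim("openmanipulator_cosim");    % hypothetical model name
    t(i) = toc;                      % wall-clock simulation time
end
fprintf("mean %.2f s, std %.2f s, real-time factor %.2f\n", ...
        mean(t), std(t), mean(t)/stopTime);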
The simulation time is compared in two ways. First, the simulation time of Simscape Multibody and Gazebo is compared using a Simulink model with a position reference signal, with the sample time of the simulation changed from 0.01 to 0.001 seconds. Secondly, the simulation time of Gazebo is compared for different simulation step sizes with an RGB camera sensor mounted on the robot, as shown in Section 4.4.3. The step size for this camera is varied from 0.1 to 0.01 seconds.
In general, Simscape Multibody is considerably faster than Gazebo for both step sizes. For a step size of 0.01 seconds, Simscape is around 28 times faster than Gazebo; with a step size of 0.001 seconds, Simscape is around 22 times faster. The simulation times of Simscape are below 5 seconds and therefore faster than real time, in contrast to Gazebo, which is around 2 and 10 times slower than real time. In addition, the sample standard deviation is higher for Gazebo than for Simscape, which means that Gazebo shows more variation in simulation time. For both environments, the variation increased when lowering the simulation step size.
The results of the second comparison are shown in Table 4.3, after which the mean simulation time and standard deviation are visualised with a bar chart in Figure 4.19.
From these results, it can be seen that the simulation time increases when an RGB camera sensor is added to the robot. Adding this sensor can increase the simulation time by around 3 to 30 seconds, depending on the sample times. In addition, the sample standard deviation is higher for the simulations that include a camera sensor. Comparing the results for a sample time of 0.001 seconds without a camera with those for a sample time of 0.001 seconds with a camera (camera sample time 0.01 seconds), the sample standard deviation increases by 571.5 per cent.
➢ The sample standard deviation of the Gazebo measurements is larger in all cases compared to Simscape Multibody, making these simulations less consistent.
➢ The implementation of a camera sensor has a major effect on the simulation time when a relatively low sample time is chosen. Given the simulation speed, this makes the co-simulation setup not the preferred method for simulating multiple sensors with a low sample time.
5 Perception in Gazebo
In this chapter, sensor simulations in Gazebo are further explored and applied, based on a use case defined by a graduation project. The chapter starts with a description of this use case, followed by an explanation of the test setup in Gazebo. Next, it is shown how a 2D and a 3D grid map can be made based on sensor simulation in Gazebo. Finally, the 3D grid map is tested on a simulation of the Image-guided Therapy robot from the use case. The files used for the perception in Gazebo can be found in the repository [41].
For the setup, a depth camera is needed that is mounted at the top of the room; this ensures that a large part of the room can be captured. The depth camera used for this setup is the Microsoft Kinect sensor. The sensor consists of a 3D depth sensor and a normal RGB camera; combining these makes it possible to measure depth simultaneously with RGB images. The depth sensor uses an IR laser projector together with an IR camera, making it possible to create a 3D map with a resolution of 640 x 480 pixels at 30 Hz. The RGB camera captures images with a resolution of 640 x 480 pixels at 30 Hz, which can be increased to 1280 x 1024 pixels at 10 Hz. The Kinect sensor is commonly used for obstacle avoidance, as shown in [13]. This research concluded that the Kinect sensor is best used in indoor environments, where IR absorption is much lower than outdoors. Moreover, the sensor has a limited detection range of between 0.5 and 6 meters. In the use case, the sensor is mounted in an indoor environment and does not need a range larger than approximately 5 meters. Therefore, the Kinect sensor is adequate for the simulation.
To simulate the Kinect sensor in Gazebo, the depth camera ROS plugin is used (included in the Gazebo ROS package [6]). This plugin makes it possible to simulate depth sensors and provides a ROS interface, which allows the data from the Kinect sensor to be published via ROS messages. The Kinect sensor is added to the SDF file of the previously created room. To mount the sensor at the top of the room, a fixed joint is made that attaches the link of the camera sensor to the world frame. The XML code of the camera model, including the created joint, can be found in [41]. The setup in Gazebo with the four walls, boxes and Kinect camera can be seen in Figure 5.1.
(a) Side view of Gazebo room (b) Top view of Gazebo room
Figure 5.1: Gazebo test room with Kinect sensor and obstacles
The following camera ROS topics are available when starting the simulation and will be further used
in the next sections for 2D and 3D grid maps:
• camera/depth/colour/image_raw
• camera/depth/image_raw
• camera/depth/points
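A minimal way to inspect this data is to connect Matlab to the ROS master on the virtual machine with the ROS Toolbox and read one point cloud; the IP address below is a placeholder.

rosinit("172.16.34.131");                          % ROS master on the Linux VM (placeholder)
sub = rossubscriber("/camera/depth/points", "sensor_msgs/PointCloud2");
msg = receive(sub, 10);                            % wait up to 10 s for a message
xyz = readXYZ(msg);                                % N x 3 point coordinates [m]
plot3(xyz(:,1), xyz(:,2), xyz(:,3), ".");
axis equal; grid on;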
To quantify the probability that a voxel is occupied, log odds are used. If a voxel is observed as occupied in several scans, its log-odds value is increased; if it exceeds a certain threshold, the voxel is considered occupied and is registered in the OctoMap. The same applies in the opposite direction when a voxel is observed as free. This probabilistic representation of the environment reduces the effect of sensor noise on the 3D map. By modifying the specified bounds and probability parameters, one can choose to register static as well as dynamic objects. Given these characteristics, the OctoMap framework is used for the use case described in Section 5.1.
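The log-odds bookkeeping can be illustrated with a small update function. The hit/miss probabilities and clamping bounds below are typical values and should be checked against the parameters configured for the OctoMap server.

function L = updateLogOdds(L, hit)
% Incremental log-odds update of a single voxel: a hit increases the log
% odds, a miss decreases it, and clamping keeps the voxel updatable.
if hit
    L = L + log(0.7/0.3);        % P(occupied | hit)  = 0.7 (assumed)
else
    L = L + log(0.4/0.6);        % P(occupied | miss) = 0.4 (assumed)
end
L = min(max(L, -2.0), 3.5);      % clamping bounds (assumed)
% The voxel is reported as occupied once L exceeds a threshold, e.g. L > 0.
end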
The next step is to transfer the OctoMap data published on the ROS network of the virtual machine to Matlab, which runs in the Windows environment. This is done in the same way as for the 2D grid map in Section 5.3, by connecting Matlab to the ROS network on the virtual machine. Subscribing to the /octomap_full ROS topic allows the data published by the OctoMap server to be received. This data can then be used directly with the readOccupancyMap3D function in Matlab to create the 3D occupancy map, which can be plotted as shown in Figure 5.5. The checkOccupancy command can then be used to check whether a specified coordinate (in meters) is unknown (-1), obstacle-free (0) or occupied (1). The corresponding Matlab script used for this map generation can be found in Appendix B.2.
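A minimal sketch of these steps is given below; the query coordinate is an arbitrary example.

sub   = rossubscriber("/octomap_full", "octomap_msgs/Octomap");
msg   = receive(sub, 10);                    % one OctoMap message from the server
map3D = readOccupancyMap3D(msg);             % occupancyMap3D object
show(map3D);                                 % plot as in Figure 5.5
occ = checkOccupancy(map3D, [1.0 0.5 0.2])   % -1 unknown, 0 free, 1 occupied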
The voxels that represent the ground plane in the OctoMap are not relevant for the use case and can therefore be filtered from the map to reduce the number of data points. The OctoMap plugin provides a built-in feature to filter the ground from the measured data. This filter is enabled and configured, together with a reference frame, in the launch file of the OctoMap server.
Because a detailed description of the objects in the room is not necessary for the use case, the resolution parameter is increased from 5 to 10 centimetres, i.e. larger voxels are used. The resolution has a large impact on the refresh time of the OctoMap.
Combining all these actions increased the refresh rate of the OctoMap topic from ±0.287 Hz to ±0.989 Hz, which means that dynamic objects are better incorporated into the 3D grid map.
Appendix B.3 shows a ROS computation graph that gives a complete overview of all the ROS nodes and topics running during the simulation, explaining the data flow from the Gazebo simulation to the Matlab node in a schematic way. This graph was made with the rqt_graph package [28]. The launch file that is used to start the nodes can be found in Appendix B.4. The final 3D occupancy map with the Panda robot and the ground filter can be seen in Figure 5.6.
As can be seen in Figure 5.7b, a large part of the robot was sometimes visible in the OctoMap or was blocking the view of the sensor. Since the robot was constantly moving, the sensor was only blocked for short moments. Therefore, the box always remained visible in the grid map, although a large part of the robot was captured as well.
Another sensor configuration was tested by attaching the Kinect sensor to the robot itself. The sensor was mounted on the carrier link that slides over the rails; if the camera were attached to another link of the robot, the viewpoint of the camera would change too much, making it difficult to keep part of the box in view. The robot was moved along the rails to test how the box was captured in the OctoMap, see Figure 5.8a. In Figure 5.8b, it can be seen that only part of the object remains visible while the robot is moving: only the part visible to the sensor is stored in the OctoMap. This is partly due to the custom settings described in Section 5.4.3, which ensure that the OctoMap is updated faster so that dynamic objects are also captured. An advantage of this configuration is that the robot itself is less visible in the map, although this depends on the robot's trajectory. It was also noticed that, due to the changing camera viewpoint, the quality of the OctoMap decreased compared to the stationary camera viewpoint; as a result, voxels were sometimes shown as occupied when they were not.
6 Conclusion and Recommendation
All in all, the comparison gives a general idea of what is possible within the environments when applied to the same scenario. It is important to note that the examples in this study only give a partial indication of what is possible in the simulation environments; they mainly represent general or commonly used possibilities. Based on these findings, it is recommended to use Simscape Multibody when the focus of the scenario is on the mechanical part of the robot. It provides many built-in sensor blocks and additional libraries to measure and modify the dynamic properties of the robot. The built-in sensor blocks allow algorithms to be tested directly without needing to simulate complex virtual sensors. Sensor simulation could be further extended by, for example, integrating with Unreal Engine or Simulink 3D Animation. Collisions between bodies are simulated as virtual springs consisting of simple geometries, which allows more accuracy than hard contacts. The Simulink blocks make it convenient to adjust the dynamic properties of the robot, and no knowledge of URDF or programming languages other than Matlab and Simulink is required.
Gazebo is preferred when a more complex environment needs to be simulated, such as rooms with dynamic obstacles. In addition, Gazebo supports a wide variety of sensor models that can generate synthetic data. Due to the ROS connection, algorithms can be directly tested on real hardware. The default ODE physics engine uses the hard contact method, which is more robust and faster compared to soft contacts. More complex collision geometries can be simulated, such as the complete mesh of the robot, but this comes at the expense of accuracy compared to soft contacts.
Apart from Gazebo, there are other external simulation environments that enable co-simulation with Simulink, for example, V-Rep, Unreal Engine and Unity. A suggestion for future work would therefore be to connect these environments to Simulink and compare them in terms of, for example, usability, utility and performance. This would further expand the understanding of simulation possibilities when co-simulating with Simulink.
In the second part of this study, sensor simulation in Gazebo is further explored from a use case perspective. The second main objective of the study was as follows:

2. Investigate how to create a grid map from sensor simulation in Gazebo based on the requirements of the use case.
Two ways of creating a grid map from sensor simulation in Gazebo have been investigated: in 2D and in 3D. In addition, the IGT robot, including the Kinect sensor, has been implemented in Gazebo so that it can be further used by the master's student. An advantage of the 2D grid map is that it updates quickly in Matlab (around ±0.23 seconds), making it possible to include dynamic obstacles in the map. However, since it is important to measure the distance between the different body links of the robot and the objects, a 3D grid map is preferable for the use case. The OctoMap makes it possible to process the data directly in Matlab so that it can be further used as controller input. The refresh time of the OctoMap has been decreased from 4.0 seconds to around 1.1 seconds. This is still relatively slower than the 2D grid map, but fast enough to capture moving objects in the room. For the use case, it is preferred to attach the Kinect sensor to the ceiling; due to the relatively complex movements of the robot, it is otherwise difficult to capture objects in the room without a lot of noise in the OctoMap.
Finding the best sensor configuration requires more research. Since the IGT robot is relatively large, it is difficult to capture all objects in a room without blocking the sensor's view. For future work, the 3D representation of the room could be made more detailed by combining point clouds from multiple Kinect sensors, allowing objects to be captured from multiple viewpoints. Another suggestion for future work is to filter the robot's geometry from the map: since the position of the robot is already known, it is not necessary to capture the robot in the OctoMap. Suggestions for suitable ROS packages can be found in Section 5.5.2.
Bibliography
[1] Ahmed R.J. Almusawi, L. Canan Dülger and Sadettin Kapucu. “Robotic arm dynamic and simulation with Virtual Reality Model (VRM)”. In: International Conference on Control, Decision and Information Technologies, CoDIT 2016 (Oct. 2016), pp. 335–340. doi: 10.1109/CODIT.2016.7593584.
[2] Heesun Choi et al. “On the use of simulation in robotics: Opportunities, challenges, and suggestions for moving forward”. In: Proceedings of the National Academy of Sciences 118.1 (2021). doi: 10.1073/pnas.1907856118.
[3] Mirella Santos Pessoa De Melo et al. “Analysis and comparison of robotics 3D simulators”. In:
Proceedings - 2019 21st Symposium on Virtual and Augmented Reality, SVR 2019. Institute of
Electrical and Electronics Engineers Inc., Oct. 2019, pp. 242–251. isbn: 9781728154343. doi:
10.1109/SVR.2019.00049.
[4] Tomas Petricek and Eitan Marder-Eppstein. robot_body_filter - ROS Wiki. url: http://wiki.ros.org/robot_body_filter.
[5] Gazebo API Reference. Gazebo: Sensors. url: https://osrf-distributions.s3.amazonaws.
com/gazebo/api/dev/group__gazebo__sensors.html.
[6] gazebo_ros_pkgs - ROS Wiki. url: http://wiki.ros.org/gazebo_ros_pkgs.
[7] GitHub - ROBOTIS-GIT/open_manipulator: OpenManipulator for controlling in Gazebo and
Moveit with ROS. url: https://github.com/ROBOTIS-GIT/open_manipulator.
[8] Victor I.C. Hofstede. The importance and purpose of simulation in robotics. Bachelor thesis, Opleiding Kunstmatige Intelligentie. Amsterdam: University of Amsterdam, June 2015.
[9] Armin Hornung et al. “OctoMap: An efficient probabilistic 3D mapping framework based on octrees”. In: Autonomous Robots 34.3 (Feb. 2013), pp. 189–206. issn: 09295593. doi: 10.1007/s10514-012-9321-0.
[10] Hyejong Kim. open_manipulator - ROS Wiki. url: http://wiki.ros.org/open_manipulator.
[11] Ehsan Izadi and Adam Bezuijen. “Simulating direct shear tests with the Bullet physics library:
A validation study”. In: PLoS ONE 13.4 (Apr. 2018). issn: 19326203. doi: 10.1371/journal.
pone.0195073.
[12] Nathan Koenig and Andrew Howard. “Design and use paradigms for Gazebo, an open-source
multi-robot simulator”. In: 2004 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS) 3 (2004), pp. 2149–2154. doi: 10.1109/IROS.2004.1389727.
[13] Jizhan Liu et al. “Experiments and analysis of close-shot identification of on-branch citrus fruit
with realsense”. In: Sensors 18.5 (May 2018). issn: 14248220. doi: 10.3390/s18051510.
[14] Martin Pecka. sensor_filters - ROS Wiki. url: http://wiki.ros.org/sensor_filters.
[15] MathWorks. Connect a Physical Model in Simulink to Unreal Engine | AUV Deep Dive, Part 6
- YouTube. May 2021. url: https://www.youtube.com/watch?v=fNd0fVYxkGg.
[16] MathWorks. Simscape Multibody - MATLAB & Simulink. url: https://nl.mathworks.com/
products/simscape-multibody.html.
[17] MathWorks Student Competitions Team. Designing Robot Manipulator Algorithms - File Exchange - MATLAB Central. Oct. 2019. url: https://nl.mathworks.com/matlabcentral/fileexchange/65316-designing-robot-manipulator-algorithms.
[18] MathWorks Support. Gazebo Simulation for Robotics System Toolbox - MATLAB & Simulink - MathWorks Benelux. url: https://nl.mathworks.com/help/robotics/ug/gazebo-simulation-for-robotics-system.html.
[19] MathWorks Support. How Gazebo Simulation for Robotics System Toolbox Works - MATLAB & Simulink - MathWorks Benelux. url: https://nl.mathworks.com/help/robotics/ug/how-gazebo-simulation-for-robotics-works.html.
[20] MathWorks Support. Perform Co-Simulation between Simulink and Gazebo - MATLAB & Simulink - MathWorks Benelux. url: https://nl.mathworks.com/help/robotics/ug/perform-co-simulation-between-simulink-and-gazebo.html.
[21] MathWorks Support. ROS 2 Dashing and Gazebo - MATLAB & Simulink. url: https://nl.mathworks.com/support/product/robotics/ros2-vm-installation-instructions-v5.html.
[22] MathWorks Support. URDF Import - MATLAB & Simulink - MathWorks Benelux. url: https://nl.mathworks.com/help/physmod/sm/ug/urdf-import.html#bvmwhdm-1.
[23] MathWorks Support. Modeling Contact Force Between Two Solids - MATLAB & Simulink - MathWorks Benelux. url: https://nl.mathworks.com/help/physmod/sm/ug/modeling-contact-force-between-two-solids.html.
[24] MoveIt. Mesh Filter with UR5 and Kinect — moveit_tutorials Noetic documentation. url: https://ros-planning.github.io/moveit_tutorials/doc/mesh_filter/mesh_filter_tutorial.html.
[25] Dragomir N. Nenchev, Atsushi Konno and Teppei Tsujita. “Simulation”. In: Humanoid Robots.
Elsevier, 2019, pp. 421–471. isbn: 9780128045602. doi: 10.1016/B978-0-12-804560-2.00015-8.
url: https://linkinghub.elsevier.com/retrieve/pii/B9780128045602000158.
[26] OpenMANIPULATOR-X. url: https://emanual.robotis.com/docs/en/platform/openmanipulator_x/overview/.
[27] pcl_ros - ROS Wiki. url: https://wiki.ros.org/pcl_ros.
[28] rqt_common_plugins - ROS Wiki. url: https://wiki.ros.org/rqt_common_plugins.
[29] U Schmucker et al. Contact processing in the simulation of the multi-body systems. Tech. rep.
Magdeburg, Germany: University of Magdeburg, Institute for Electrical Energy Systems, Dec.
2008. url: https://www.researchgate.net/publication/228699941.
[30] Sebastian Castro. Walking Robot Modeling and Simulation » Student Lounge - MATLAB & Simulink. Dec. 2019. url: https://blogs.mathworks.com/student-lounge/2019/12/20/walking-robot-modeling-and-simulation/.
[31] Stanford Artificial Intelligence Laboratory et al. Robot Operating System, ROS Melodic Morenia. May 2018. url: https://www.ros.org/.
[32] Aaron Staranowicz and Gian Luca Mariottini. “A Survey and Comparison of Commercial and
Open-Source Robotic Simulator Software”. In: Proceedings of the 4th International Conference
on PErvasive Technologies Related to Assistive Environments - PETRA ’11 (May 2011). doi:
10.1145/2141622.
[33] Kenta Takaya et al. “Simulation environment for mobile robots testing using ROS and Gazebo”.
In: 2016 20th International Conference on System Theory, Control and Computing, ICSTCC 2016
- Joint Conference of SINTES 20, SACCS 16, SIMSIS 20 - Proceedings (Dec. 2016), pp. 96–101.
doi: 10.1109/ICSTCC.2016.7790647.
[34] tf - ROS Wiki. url: http://wiki.ros.org/tf.
[35] topic_tools - ROS Wiki. url: http://wiki.ros.org/topic_tools.
[36] M. Torres-Torriti, T. Arredondo and P. Castillo-Pizarro. “Survey and comparative study of free
simulation software for mobile robots”. In: Robotica 34.4 (Apr. 2016), pp. 791–822. issn: 14698668.
doi: 10.1017/S0263574714001866.
[37] Simon Vanneste, Ben Bellekens and Maarten Weyn. 3DVFH+: Real-Time Three-Dimensional Obstacle Avoidance Using an Octomap. Tech. rep. Antwerpen: CoSys-Lab, Faculty of Applied Engineering, 2014.
[38] Andres Vivas and Jose Maria Sabater. “UR5 Robot Manipulation using Matlab/Simulink and
ROS”. In: 2021 IEEE International Conference on Mechatronics and Automation, ICMA 2021
(Aug. 2021), pp. 338–343. doi: 10.1109/ICMA52036.2021.9512650.
[39] Vjekoslav Damic, Maida Cohodar and Nedzma Kobilica. “Development of Dynamic Model of Robot with Parallel Structure Based on 3D CAD Model”. In: Proceedings of the 30th DAAAM International Symposium. DAAAM International, 2019, pp. 155–160. issn: 1726-9679. doi: 10.2507/30th.daaam.proceedings.020.
[40] Guido Wolfs. Co-Simluation Simulink and Gazebo Examples · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Co-Simluation%20Simulink%20and%20Gazebo%20Examples.
[41] Guido Wolfs. Gazebo Perception Files · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Gazebo%20Perception%20Files.
[42] Guido Wolfs. Scenario OpenManipulator Files · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Scenario%20OpenManipulator%20Files.
[43] Guido Wolfs. Simulink Models · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Simulink%20Models.
[44] Guido Wolfs. Simulink Models/Collision Comparison · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Simulink%20Models/Collision%20Comparison.
[45] Guido Wolfs. Simulink Models/Joint-limit Comparison · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Simulink%20Models/Joint-limit%20Comparison.
[46] Guido Wolfs. Simulink Models/Sensor Comparison · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Simulink%20Models/Sensor%20Comparison.
[47] Guido Wolfs. Simulink Models/Time Comparison · main · ETProjects / GW CompSimEnv · GitLab. 2022. url: https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Simulink%20Models/Time%20Comparison.
[48] Guido Wolfs. Virtual Machine · main · ETProjects / GW CompSimEnv · GitLab. 2022. url:
https://gitlab.tue.nl/et_projects/gw-compsimenv/-/tree/main/Virtual%20Machine.
A Co-Simulation and Comparison
• ROS 2 Dashing.
This function directly changes the position of the first joint of the manipulator from 0 to pi/4. For this command, neither a connection with Simulink nor a feedback controller is needed. Other types of commands can be found in the MathWorks support documentation, see [18].
The Gazebo Apply Command block on the right sends the command data to the plugin in the Linux environment.
This block receives the data assembled by the bus assignment, which consists of the following elements:
• Gazebo Blank Message block: Creates a blank Gazebo message or command. For the
manipulator scenario, "SetJointPosition", "SetJointVelocity" and "ApplyJointTorque" are used, since
these are needed to control the revolute and prismatic joints.
• Gazebo Select Entity block: A joint or link from the Gazebo model can be selected to be
actuated. These topics are Gazebo topics and can be found directly in Simulink; no ROS node is
required to transfer the data, as this is all done by the plugin itself. Which topics are available
depends on the possibilities of the robot in the Gazebo environment. For the OpenManipulator,
the options shown in Figure A.3 can be chosen. Only joints 1 to 4 and the gripper
topics are used for the scenario.
• Constant block with "uint32(0)": Indicates which axis will be actuated. Since the OpenMa-
nipulator only has single-degree-of-freedom joints, this value is set to "uint32(0)", which indicates
that the first axis is actuated.
• Constant blocks with "0" and "1e7" duration: These blocks define the duration of the applied
input (in this case position). Two separate blocks, one for seconds and one for nanoseconds, are used
to increase precision. For the OpenManipulator, the duration of the input signal is chosen to be equal
to the sample time, which is specified as "0.01" seconds; the duration is therefore set to 0 seconds and
1e7 nanoseconds (see the sketch after this list). When the sample time of the model is increased, the
duration of the reference input can be increased accordingly, as was done for the simulation time
comparison in Section 4.6.
• Position: The reference position data that the joints need to follow. In this case, this element
specifies the chosen position of the OpenManipulator. This port can also be changed to velocity
or torque if another type of reference input is required; for example, when changing the reference
type from position to torque, the Gazebo Blank Message block must also be changed accordingly.
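As a small worked example of the duration values above, the following Matlab snippet splits a sample time into the whole-second and nanosecond values used for the two constant blocks. This is only an illustrative sketch; the variable names are chosen freely and do not appear in the Simulink models themselves.

% Split a sample time [s] into the two constant-block values that define the
% duration of the applied command: whole seconds plus remaining nanoseconds.
Ts = 0.01;                                      % sample time of the Simulink model [s]
durationSec  = floor(Ts);                       % "seconds" constant block      -> 0
durationNsec = round((Ts - durationSec) * 1e9); % "nanoseconds" constant block  -> 1e7
fprintf('duration = %d s + %d ns\n', durationSec, durationNsec);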
B Gazebo Perception
In this appendix, RViz visualisations are included to demonstrate the coordinate transformation from
the sensor frame. In addition, the Matlab code for the grid maps, a ROS computation graph and the
launch file created for the sensor simulation in Gazebo are provided.
40 name="spawn_model"
41 pkg="gazebo_ros"
42 type="spawn_model"
43 args="-urdf -param robot_description -model panda_arm"
44 output="screen" />
45
46 </launch>
Listing B.4: XML code - Launch file for OctoMap including specified parameters
1 <launch>
2 <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
3 <param name="resolution" value="0.1" />
4
5 <!-- fixed map frame (set to 'map' if SLAM or localization running!) -->
6 <param name="frame_id" type="string" value="camera_link2" />
7 <param name="base_frame_id" type="string" value="camera_link2" />
8
9 <!-- maximum range to integrate (speedup!) -->
10 <param name="sensor_model/max_range" value="5.0" />
11
12 <!-- data source to integrate (PointCloud2) -->
13 <remap from="cloud_in" to="/voxel_grid/output" />
14
15 <!-- Set parameters -->
16 <param name="sensor_model/hit" value="1" />
17 <param name="sensor_model/miss" value="0.01" />
18 <param name="sensor_model/min" value="0.49" />
19 <param name="sensor_model/max" value="0.5" />
20
21 <param name="filter_ground" type="bool" value="true" />
22 </node>
23 </launch>
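Once this launch file is running, the map built by octomap_server can also be inspected from Matlab. The snippet below is a minimal sketch, assuming the ROS Toolbox and the down-projected 2D map topic /projected_map published by octomap_server; it is not one of the scripts included in the repository, and newer Matlab releases may require rosReadOccupancyGrid instead of readOccupancyGrid.

% Minimal sketch: read the down-projected 2D occupancy map from octomap_server
% and convert it into a Matlab occupancy map object (requires ROS Toolbox).
rosinit;                               % connect to the ROS master started with the launch file
sub = rossubscriber('/projected_map'); % nav_msgs/OccupancyGrid published by octomap_server
msg = receive(sub, 10);                % wait up to 10 seconds for a map message
map = readOccupancyGrid(msg);          % convert the message into an occupancy map object
show(map);                             % visualise the 2D map
rosshutdown;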
C Gazebo Models
In this appendix, the XML code used for the Gazebo models is included. This code can
be added directly to an SDF file for a Gazebo simulation. The complete SDF files containing these
sensor models, as well as the URDF for the OpenManipulator, can be found in the repository [42].
40 <visualize>true</visualize>
41 </sensor>
42 </link>
43 </model>
53 <mean>0.0</mean>
54 <stddev>1.7e-2</stddev>
55 <bias_mean>0.1</bias_mean>
56 <bias_stddev>0.001</bias_stddev>
57 </noise>
58 </x>
59 <y>
60 <noise type="gaussian">
61 <mean>0.0</mean>
62 <stddev>1.7e-2</stddev>
63 <bias_mean>0.1</bias_mean>
64 <bias_stddev>0.001</bias_stddev>
65 </noise>
66 </y>
67 <z>
68 <noise type="gaussian">
69 <mean>0.0</mean>
70 <stddev>1.7e-2</stddev>
71 <bias_mean>0.1</bias_mean>
72 <bias_stddev>0.001</bias_stddev>
73 </noise>
74 </z>
75 </linear_acceleration>
76 </imu>
77 <always_on>1</always_on>
78 <update_rate>200</update_rate>
79 </sensor>
80 </link>
81
82 <!-- Sensor joint -->
83 <joint name='imu_sensor_joint' type='fixed'>
84 <pose>0.16 0 0.2045 0 -0 0</pose>
85 <parent>link5</parent>
86 <child>imu_sensor_link</child>
87 </joint>
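To illustrate the effect of the Gaussian noise parameters above, the following Matlab sketch generates synthetic accelerometer samples with the same mean, standard deviation, bias and update rate. It only approximates Gazebo's noise model (Gazebo draws the bias once per run and additionally randomises its sign) and is not part of the repository files.

% Approximation of the Gazebo Gaussian noise model for one accelerometer axis:
% a constant bias drawn once per run plus white noise added to every sample.
mean_n      = 0.0;      % <mean>
stddev_n    = 1.7e-2;   % <stddev>
bias_mean   = 0.1;      % <bias_mean>
bias_stddev = 0.001;    % <bias_stddev>
fs          = 200;      % <update_rate> [Hz]

t         = (0:1/fs:1)';                      % one second of samples
trueAccel = zeros(size(t));                   % assume the link is at rest
bias      = bias_mean + bias_stddev*randn();  % sampled once per simulation run
noise     = mean_n + stddev_n*randn(size(t)); % sampled at every update
measuredAccel = trueAccel + bias + noise;

plot(t, measuredAccel);
xlabel('time [s]'); ylabel('acceleration [m/s^2]');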
22 <box>
23 <size>0.1 0.1 0.1</size>
24 </box>
25 </geometry>
26 </visual>
27 <sensor name="camera" type="camera">
28 <camera>
29 <horizontal_fov>1.047</horizontal_fov>
30 <image>
31 <width>320</width>
32 <height>240</height>
33 </image>
34 <clip>
35 <near>0.1</near>
36 <far>100</far>
37 </clip>
38 </camera>
39 <always_on>1</always_on>
40 <update_rate>200</update_rate>
41 <visualize>true</visualize>
42 </sensor>
43 </link>
44 </model>
Listing C.6: XML code - Kinect sensor with fixed joint attachment
1 <!-- Kinect Sensor Model -->
2 <model name='depth_camera'>
3 <pose frame=''>0.5 0 2.5 -1.57079 1.57079 3.14159</pose>
4 <link name='camera_sensor_link'>
5 <inertial>
6 <mass>0.1</mass>
7 <inertia>
8 <ixx>0.000166667</ixx>
9 <iyy>0.000166667</iyy>
10 <izz>0.000166667</izz>
11 <ixy>0</ixy>
12 <ixz>0</ixz>
13 <iyz>0</iyz>
14 </inertia>
15 <pose frame=''>0 0 0 0 -0 0</pose>
16 </inertial>
17 <collision name='collision'>
18 <geometry>
19 <box>
20 <size>0.073 0.276 0.072</size>
21 </box>
22 </geometry>
23 <max_contacts>10</max_contacts>
24 <surface>
25 <contact>
26 <ode/>
27 </contact>
28 <bounce/>
29 <friction>
30 <torsional>
31 <ode/>
32 </torsional>
33 <ode/>
34 </friction>
35 </surface>
36 </collision>
37 <visual name='visual'>
38 <geometry>
39 <mesh>
40 <uri>model://kinect/meshes/kinect.dae</uri>
41 </mesh>
42 </geometry>
43 </visual>
44 <sensor name='RGB_camera1' type='depth'>
45 <update_rate>20</update_rate>
46 <camera name='__default__'>
47 <horizontal_fov>1.0472</horizontal_fov>
48 <image>
49 <width>640</width>
50 <height>480</height>
51 <format>R8G8B8</format>
52 </image>
53 <clip>
54 <near>0.05</near>
55 <far>3</far>
56 </clip>
57 </camera>
58 <plugin name='camera_plugin' filename='libgazebo_ros_openni_kinect.so'>
59 <baseline>0.2</baseline>
60 <alwaysOn>1</alwaysOn>
61 <updateRate>0.0</updateRate>
62 <cameraName>camera_ir</cameraName>
63 <imageTopicName>/camera/color/image_raw</imageTopicName>
64 <cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
65 <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
66 <depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
67 <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
68 <frameName>camera_link</frameName>
69 <pointCloudCutoff>0.5</pointCloudCutoff>
70 <pointCloudCutoffMax>3.0</pointCloudCutoffMax>
71 <distortionK1>0</distortionK1>
72 <distortionK2>0</distortionK2>
73 <distortionK3>0</distortionK3>
74 <distortionT1>0</distortionT1>
75 <distortionT2>0</distortionT2>
76 <CxPrime>0</CxPrime>
77 <Cx>0</Cx>
78 <Cy>0</Cy>
79 <focalLength>0</focalLength>
80 <hackBaseline>0</hackBaseline>
81 </plugin>
82 </sensor>
83 <self_collide>0</self_collide>
84 <enable_wind>0</enable_wind>
85 <kinematic>0</kinematic>
86 </link>
87 <joint name='RGBd_joint' type='fixed'>
88 <pose frame=’’>2 2 2 0 -0 0</pose>
89 <parent>world</parent>
90 <child>camera_sensor_link</child>
91 </joint>
92 </model>
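The plugin above publishes, among other topics, the point cloud on /camera/depth/points. As an illustration, a single point cloud could be retrieved in Matlab as follows. This is a minimal sketch assuming the ROS Toolbox; it is not part of the repository files, and newer Matlab releases may require rosReadXYZ instead of readXYZ.

% Minimal sketch: receive one depth point cloud from the Kinect plugin
% and extract the Cartesian coordinates of its points (requires ROS Toolbox).
rosinit;                                     % connect to the ROS master
sub = rossubscriber('/camera/depth/points'); % sensor_msgs/PointCloud2 from the plugin
msg = receive(sub, 10);                      % wait up to 10 seconds for a message
xyz = readXYZ(msg);                          % N-by-3 matrix of point coordinates [m]
plot3(xyz(:,1), xyz(:,2), xyz(:,3), '.');    % quick visual check of the cloud
rosshutdown;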