Final Capstone Project Report
INTRODUCTION
LITERATURE REVIEW
THEORETICAL BACKGROUND
SYSTEM DESIGN
IMPLEMENTATION
Tables:
Table 1: SLAM algorithm types
Table 2: Visual SLAM types
Table 3: Comparison of some ROS 2 distributions
Table 4: Robotics simulation software
Figures:
Figure 1: Mobile robot Festo Robotino
Figure 2: Top and front views of the Robotino
Figure 3: Decomposition of wheel-pitch circumferential velocities
Figure 4: SLAM processing flow (Ref. 25)
Figure 5: System block diagram
Figure 6: Flowchart
Figure 7: Downloaded VirtualBox package
Figure 8: Uploading Ubuntu
Figure 9: Creating a VirtualBox machine: starting a new machine
Figure 10: Creating a VirtualBox machine: selecting the type
Figure 11: Adding Ubuntu to VirtualBox
Figure 12: Ubuntu running within VirtualBox
Figure 13: Visual ORB SLAM mapping result
Figure 14: FastSLAM mapping result
Figure 15: Graph SLAM mapping result
Figure 16: EKF SLAM result
Equations:
Equation 1: Bayesian filter at the core of SLAM
Equation 2: Probabilities of all possible values of a random variable sum to 1
Equation 3: Predict the state at time t
Equation 4: Linearize the system dynamics around the predicted state
Equation 5: Predict the covariance of the state
Equation 6: Predict the measurement based on the state estimate
Equation 7: Compute the Kalman gain
Equation 8: Update the state estimate
Equation 9: Update the covariance of the state
Equation 10: Mean landmark error
Equation 11: Distance error
Equation 12: Angle error
Introduction
This capstone project addresses the problem of comparing the accuracy of the mapping and localization
results achieved by four different Simultaneous Localization and Mapping (SLAM) methods implemented
on a Robotino autonomous robot. The goal is to evaluate the performance of these SLAM techniques and
identify which method produces the most accurate maps and localization estimates for the Robotino.
The accurate mapping of the environment and precise localization of the Robotino are crucial for its
effective operation as an autonomous robot. By conducting a comparative analysis of the four SLAM
methods, this project aims to determine which technique provides the highest level of accuracy in generating
maps and localizing the robot within the environment.
The importance of this project lies in its potential to improve the overall performance and reliability of the
Robotino's navigation and localization capabilities. By identifying the most accurate SLAM method, the
project outcomes can contribute to enhancing the efficiency and effectiveness of the Robotino in performing
its tasks, such as autonomous package delivery.
Furthermore, the project's findings and insights can have broader implications for the field of autonomous
robotics. The evaluation and comparison of different SLAM techniques can provide valuable knowledge and
guidance for researchers and practitioners working with autonomous robots in diverse applications. This
project can contribute to advancing the understanding of SLAM methods and their suitability for different
robotic systems beyond the Robotino, fostering innovation and improvement in the field.
Project Description
The objective of this project is to compare the accuracy of mapping and localization results achieved by four
different Simultaneous Localization and Mapping (SLAM) methods implemented on a Robotino
autonomous robot. SLAM is a vital technique in robotics that enables a robot to simultaneously create a map
of its environment and determine its own position within that map.
The project will involve implementing and integrating four distinct SLAM algorithms onto the Robotino
using several platforms and software tools. Each algorithm will be responsible for mapping the environment and
estimating the robot's position based on sensor data, such as laser scans and odometry readings. The four
SLAM methods chosen for evaluation will be selected based on their popularity and performance in the field
of robotics.
To conduct the comparison, a series of experiments will be designed and executed in various environments.
The Robotino will navigate through these environments, and the four SLAM methods will generate maps
and estimate the robot's position. The accuracy of the maps and localization results will be evaluated by
comparing them against ground truth data obtained from external sources, such as manually created maps or
motion capture systems.
Data collected during the experiments will be analyzed to assess the accuracy of each SLAM method in
terms of map quality and localization precision. Various metrics, such as map consistency, feature
alignment, and localization error, will be used to quantify and compare the performance of the different
algorithms. The results will be statistically analyzed to determine significant differences in accuracy
between the SLAM methods.
The project aims to provide insights into which SLAM algorithm performs best in terms of accuracy and
reliability on the Robotino platform. The findings will help identify the most suitable SLAM method for
mapping and localization tasks in similar autonomous robot applications. Additionally, the project will
contribute to the broader field of robotics by adding to the knowledge base of SLAM algorithms and their
performance characteristics, aiding researchers and practitioners in making informed decisions regarding
SLAM implementation.
Design Specifications
The specifications for this project are comprehensive and cover a wide range of aspects that are critical for
the development and successful deployment of an autonomous delivery robot. These specifications are
designed to ensure that the final product meets all the requirements and performance criteria and can
perform its mission effectively and efficiently. The specifications cover the following items:
Hardware:
The project will utilize the Robotino platform within the Webots simulation software. The Robotino robot
will be equipped with a variety of sensors, such as distance sensors, position sensors, a gyroscope, cameras,
and an IMU, to perceive its environment and enable navigation. The robot will also have the capability to
establish a network connection within the simulation for communication and localization information.
Software:
The project will implement SLAM algorithms within the Webots environment to enable the robot to create a
map of the simulated environment and accurately determine its position. Additionally, obstacle detection
and avoidance algorithms will be integrated into the robot's control system to ensure safe localization.
The software design specifications for this project involve implementing four SLAM algorithms within the
Webots environment, including EKF SLAM using the 2D SLAM demo from MRPT and graph-based SLAM using
ROS 2 and Webots. These algorithms enable the robot to create accurate maps and determine its position.
Path planning algorithms generate collision-free paths, while obstacle detection and avoidance algorithms
ensure safe navigation. Visualization tools like RViz, Webots, and ROS provide real-time feedback. The
objective is to enhance mapping, localization, path planning, and obstacle avoidance capabilities for the
robot's navigation within Webots.
Networking:
The simulation will allow the robot to establish a network connection for communication and localization
purposes. The robot will receive real-time updates on its mission and location through this network
connection. However, the project will also focus on ensuring the robot's ability to operate robustly and
autonomously in the event of a network disconnection or unstable connection. The SLAM algorithm will
continue to function using previously acquired map data, allowing the robot to maintain localization and
navigate to the target location even without a stable network connection.
Safety:
The robot's behavior will be designed to prioritize safety. It should include collision detection and avoidance
mechanisms to mitigate risks to people, property, and the environment. Safety mechanisms will be
thoroughly tested and integrated into the robot's control system to ensure safe operation within the simulated
environment. Safety is ensured by the robot's ability to avoid obstacles.
Human-Robot Interaction:
The project will incorporate a user interface within the Webots simulation software to enable remote control
and monitoring of the robot's status and mission progress. The user interface will be intuitive and user-
friendly, providing clear instructions for operation and facilitating interaction with the robot within the
simulated environment.
System Assumptions
This project is based on important system assumptions. The first assumption is that the robot will be
operating in a known and static environment, allowing it to use its sensors and localization capabilities
effectively. The functionality of the robot's sensors and actuators is also assumed to be as expected,
allowing it to gather information about the environment and localize itself accordingly.
Additionally, the algorithm for obstacle detection and avoidance is assumed to be accurate and reliable.
There are no assumptions of malicious interference, and the robot is expected to follow planned paths, detect
the environment, and build a map for it. The testing environment is assumed to accurately represent the
expected operating conditions for reliable performance evaluations.
Performance Criteria
This project aims to evaluate the performance of the robot in the Webots simulation environment based on
three key criteria. The first criterion is Map Accuracy, which assesses the accuracy of the generated map by
comparing it with ground truth data. The second criterion is Localization Accuracy, which measures how
closely the robot's estimated position aligns with the actual ground truth position. The third criterion is
Obstacle Detection and Avoidance, which evaluates the effectiveness of the robot's algorithms in detecting
and avoiding obstacles to ensure collision-free navigation. By considering these criteria, a thorough
assessment of the robot's mapping accuracy, localization accuracy, and obstacle detection capabilities in the
simulation environment can be conducted.
Literature Review
On one hand, the field of mobile robotics has seen exponential growth in recent years, with
many companies venturing into the development and deployment of autonomous robots for a wide
range of applications. One such company, Kiva Systems (now known as Amazon Robotics), utilizes a
fleet of autonomous mobile robots for warehouse fulfillment. These robots navigate through the
warehouse, picking and transporting items to be packaged for shipment. Another company, Nuro,
uses autonomous delivery vehicles for the transportation of groceries and other goods, offering a
convenient and efficient solution for customers who can receive their items without having to leave
their homes. Starship Technologies uses small, sidewalk-navigating robots for local delivery in
densely populated urban areas, offering a low-cost and environmentally friendly solution. Udelv, on
the other hand, utilizes autonomous delivery vans for grocery and other goods delivery, providing
a larger capacity for deliveries. Savioke has created autonomous robots to deliver items within
hotels, improving the guest experience by providing quick and efficient service. NayaTech has
developed drones for last-mile delivery in urban areas, offering a fast and efficient solution for
delivering items directly to customers. Eliport uses autonomous drones for medical sample delivery
in hospitals, providing a safe and reliable way to transport sensitive materials. Boxbot has
implemented autonomous delivery trucks for package delivery, offering a cost-effective solution for
large-scale deliveries. Postmates uses a fleet of autonomous delivery robots for food and goods
delivery, offering a convenient solution for customers. RoboPostman utilizes autonomous robots
for mail and package delivery in residential areas, offering a quick and efficient solution for postal
services. Each of these companies offers unique advantages and disadvantages in terms of cost,
efficiency, and reliability. However, they all demonstrate the potential for mobile robotics to
revolutionize various industries and make our lives easier.
On the other hand, SLAM is a widely studied problem in robotics, with a significant impact
on autonomous navigation. Researchers have proposed various techniques for SLAM, using a range
of sensor types. One example is using laser sensors, as proposed by (Eliazar and Parr, 2003), which
can produce detailed maps but may be affected by shiny or black objects. Another approach is using
sonar sensors, as proposed by (Zunino and Christensen, 2001), which are low-cost and have low
computational complexity but lack fine-grained information. Other examples include bio-sonar
(Steckel and Peremans, 2013), which has high intelligent interaction capability but struggles in
complex environments, and vision-based SLAM (Irie et al., 2012), which can acquire more
information but is sensitive to shadows and illumination conditions. Despite the limitations of
individual sensors, some researchers propose using multiple sensors for improved accuracy. SLAM
algorithms are utilized in several robot projects that are currently under development, including
Scriba, Mapbot, Sparki, BOBO, and Autonomous Home Robot. Each project has its unique
components, such as ELP cameras, stepper motors, servomotors, IR or ultrasonic distance sensors,
Matlab, Sparki's onboard servo-mounted ultrasonic distance sensor, Raspberry Pi, Arduino Mega,
Teensy 4.1, LIDAR system, and ROS, to build the robot and run the SLAM algorithm. The
implementation of SLAM varies from project to project, with some using PID control systems for
self-balancing and navigation, while others utilize serial connections to collect environmental data.
Despite the differences, all these projects have one common goal, which is to use the SLAM
algorithm for navigation.
Despite the promising future of mobile robotics, there are still several disadvantages that
need to be addressed. These include the high cost of acquiring and maintaining the robots, the
need for specialized infrastructure, and the limitations of current technology. Currently, the
cost of autonomous delivery robots is relatively high, making it difficult for some companies to
implement them. In addition, the technology required to support autonomous delivery robots is
still developing and the reliability and robustness of these systems are not yet at the level of human
drivers. The regulations and infrastructure for autonomous delivery robots are also not yet fully
developed, presenting additional challenges for companies looking to implement them. This is
where this project comes in. The goal of this project is to utilize the SLAM algorithm to overcome the
limitations faced by current autonomous robots in the industry. The SLAM algorithm will allow
the robot to map its surroundings in real time and keep track of its location, even in changing
environments. This will greatly enhance the reliability and robustness of the robot, reducing the
chances of failure and improving the efficiency of its operations. The SLAM algorithm will also
allow the robot to dynamically adjust its trajectory based on its surroundings, ensuring that it
stays on track even in changing environments. Furthermore, the use of SLAM will improve the
safety of the robot, as it will be able to detect and avoid potential hazards in its path. Besides, the
Festo Robotino mobile robot has been used in many projects, each showcasing its unique
capabilities and applications. Some examples of Festo Robotino mobile robot projects include the
development of an autonomous inspection robot, a cooperative multi-robot system, and a delivery
robot. Each project utilizes the Festo Robotino's advanced features, such as its robustness,
versatility, and reliability, to achieve its specific goals. Despite the differences in their applications,
all these projects share the common goal of utilizing Festo Robotino's capabilities to solve real-
world problems and make a positive impact on society. Whether it's through improving the
efficiency of industrial processes, reducing human workload in hazardous environments, or
delivering goods more conveniently and efficiently, the Festo Robotino is an innovative and
valuable tool for researchers and engineers. With its ability to integrate with other technologies
and its open-source software architecture, the Festo Robotino has the potential to continue pushing
the boundaries of what is possible in the field of robotics. Overall, the Festo Robotino is a versatile
and reliable platform that is well-suited to a wide range of applications and has proven itself to be a
valuable tool in the advancement of robotics.
Theoretical Background
Festo Robotino
The Festo Robotino is a highly maneuverable mobile robot designed for use in a variety of
applications, as shown in Fig. 1. This versatile robot is equipped with three omni wheels and
independent motors, allowing it to move in any direction with precision and control. Additionally,
the Robotino features a sturdy circular stainless-steel frame and a rubber protection strip with
built-in collision protection sensors. The robot also includes nine infrared distance sensors, two
inductive analog sensors, two digital optical sensors, a camera, and the ability to integrate
additional electrical components via an I/O interface. With its advanced features, the Festo
Robotino is ideal for tasks such as following predefined paths, recognizing and avoiding obstacles,
and transporting payloads.
The control system of Robotino is a sophisticated and highly advanced system that allows for
precise navigation and maneuvering within complex environments. The control system is
comprised of a 32-bit microcontroller, which provides motor control, as well as multiple sensors,
including infrared distance sensors, inductive and optical sensors, and a color camera. This sensor
suite enables the Robotino to perceive its surroundings and navigate with high accuracy.
Additionally, the Robotino features a premium or basic edition embedded PC, depending on the
specific needs of the application, and various I/O interfaces for integrating additional electrical
components. The Robotino's control system is designed to be flexible and adaptable and can be
further developed and customized to meet the specific requirements of each project.
Robot kinematics deals with the motion and transformations of robots, and it is a crucial aspect of
the design and control of the Festo Robotino. The Robotino is equipped with three omnidirectional
wheels that provide a high degree of maneuverability, allowing the robot to move in any direction.
The wheels are independently controlled by motors, which enable the Robotino to navigate
complex environments with precision and control. To understand the kinematics of the Festo
Robotino, it is important to consider both the geometry of the robot and the mathematical models
that describe its motion. These models are used to calculate the robot's velocity and acceleration, as
well as its position and orientation in the environment. The control system of the Robotino can use
this information to make real-time decisions and execute actions, making the robot highly
responsive and agile. In conclusion, the kinematics of the Festo Robotino plays a vital role in its
design, control, and operation, enabling the robot to move and manipulate objects in its
environment with ease.
Robot sensor and control system
The Festo Robotino is a mobile robot system with a diameter of 450 mm and a height of
290 mm including the controller housing. It has a total weight of approximately 20 kg (without the
mounting tower) and can carry a maximum payload of 30 kg. The robot is equipped with a circular
stainless steel frame that features an omnidirectional drive, allowing it to move in all directions.
The frame also includes a rubber protection strip that has a built-in collision protection sensor.
The robot has nine infrared distance sensors, one inductive sensor, and two optical sensors
that help it to detect its surroundings and avoid obstacles. It also has a color camera with full HD
1080p resolution and USB interface that can be used for visual monitoring and navigation. The
premium edition of the robot comes with a mounting tower that has three mounting platforms,
making it highly versatile and suitable for a wide range of applications.
The Festo Robotino has an embedded PC to COM Express specification and comes in two
editions - the premium edition with an Intel i5 processor, 2.4 GHz, dual-core, 8 GB RAM, and 64
GB SSD, and the basic edition with an Intel Atom processor, 1.8 GHz, dual-core, 4 GB RAM, and
32 GB SSD. It also has WLAN connectivity to specification 802.11g/802.11b as a client or access
point, which makes it easy to communicate with other devices.
The robot has a motor control system with a 32-bit microcontroller and free motor
connection, and it has 2 Ethernet ports, 6 USB 2.0 (HighSpeed) ports, 2 PCI Express slots, and 1
VGA port. It also has a 1x I/O interface that can be used for integrating additional electrical
components. The Festo Robotino is a highly advanced and versatile mobile robot system that can
be used for a wide range of applications, as shown in Figure 2.
In this section, a comprehensive mathematical analysis of the mobile robot's kinematics and
dynamics is presented. The focus is on the kinematics of the robot body and the dynamics of the DC drives.
The kinematic model of the omnidirectional drive mobile robot can be derived using the following
equation. The angles correspond to the robot drive wheel distribution, with α1 = 60 degrees, α2 = 180
degrees, and α3 = -60 degrees. By separating the coefficients from the expressions, a Jacobian matrix J can
be defined.
$$J = \frac{1}{r}\begin{bmatrix} -\sin(\alpha_1) & \cos(\alpha_1) & L \\ -\sin(\alpha_2) & \cos(\alpha_2) & L \\ -\sin(\alpha_3) & \cos(\alpha_3) & L \end{bmatrix}$$

where the angles correspond to the robot drive-wheel distribution, $\alpha_1 = 60^\circ = \pi/3$, $\alpha_2 = 180^\circ = \pi$, $\alpha_3 = -60^\circ = -\pi/3$ (equivalently $5\pi/3$); r is the wheel's radius [m] and L is the distance between the center of the robot base and the center of a wheel [m].

The robot velocity follows from the wheel angular velocities $\omega$ as

$$v = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = R(\theta)\, J^{-1}\, \omega$$

By substituting the rotation matrix R(θ) and the constraint matrix J, one gets

$$v = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -\frac{r}{\sqrt{3}} & 0 & \frac{r}{\sqrt{3}} \\ \frac{r}{3} & -\frac{2r}{3} & \frac{r}{3} \\ \frac{r}{3L} & \frac{r}{3L} & \frac{r}{3L} \end{bmatrix} \cdot f \cdot \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{bmatrix}$$

where f = 1/16 is the gear ratio of the Robotino.
The robot is composed of a platform, including an underframe and chassis, with three omnidirectional
wheels. These wheels are positioned at an angle of 120 degrees to each other, and each wheel is powered
independently by one DC motor through a planetary gearbox and a toothed belt. For modelling purposes, the
gearing mechanism is replaced by a single equivalent belt gear; Figure 3 shows the decomposition of the
wheel-pitch circumferential velocities.
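To make the kinematic model concrete, the following Python sketch evaluates the velocity equation above for given wheel speeds. It is a minimal illustration: the numeric values of r and L are placeholders, not the Robotino's datasheet values.

import numpy as np

r = 0.040   # wheel radius [m] (placeholder value)
L = 0.135   # distance from robot centre to wheel centre [m] (placeholder value)
f = 1 / 16  # gear ratio

def body_velocity(theta, omega):
    """World-frame velocity [x_dot, y_dot, theta_dot] from wheel speeds omega [rad/s]."""
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    J_inv = np.array([[-r / np.sqrt(3), 0.0,         r / np.sqrt(3)],
                      [ r / 3,         -2 * r / 3,   r / 3],
                      [ r / (3 * L),    r / (3 * L), r / (3 * L)]])
    return R @ J_inv @ (f * np.asarray(omega))

# Equal wheel speeds cancel translation and produce pure rotation.
print(body_velocity(0.0, [10.0, 10.0, 10.0]))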
SLAM Algorithms
The core of the SLAM algorithm lies in the Bayesian filter equation:
$$P(x_k, m \mid z_1, \ldots, z_k, u_1, \ldots, u_k) = \frac{1}{Z} \prod_{i=1}^{k} p(z_i \mid x_k, m, u_{i-1}) \cdot p(x_k \mid x_{k-1}, u_{k-1})$$

Equation 1: Bayesian filter at the core of SLAM
This equation represents the probability of the robot's state (x_k) and the map of the environment (m) given
its sensor measurements (z_1, ..., z_k) and control inputs (u_1, ..., u_k). It consists of two main components:
Product of sensor likelihoods: This part calculates the probability of the robot's sensor measurements given
its state and the map. It takes into account factors such as sensor noise and the correspondence between the
measurements and the map.
Product of motion models: This part calculates the probability of the robot's state given its previous state and
control input. It models the robot's motion dynamics, including uncertainty and constraints.
By applying the Bayesian filter equation, the SLAM algorithm iteratively updates the robot's state and map
based on new sensor measurements: the motion model predicts the robot's state from the previous state and
control input, and the sensor likelihoods correct that prediction and refine the map with new
information about the environment. This recursive nature of the SLAM algorithm allows it to gradually
build a map of the environment while estimating the robot's position, even in the presence of uncertainty. By
integrating sensor measurements and control inputs, SLAM enables the robot to improve its understanding
of the environment over time, making it an essential tool for autonomous systems in various domains.
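As a toy numerical illustration of this recursive update, the sketch below runs one predict/correct cycle of a discrete Bayes filter over a five-cell corridor. The motion kernel and sensor likelihood values are made-up numbers for the example, not values from this project.

import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One predict/correct cycle of a discrete Bayes filter over grid cells."""
    # Prediction: apply the motion model p(x_k | x_{k-1}, u) by convolution.
    predicted = np.convolve(belief, motion_kernel, mode="same")
    # Correction: weight by the sensor likelihood p(z_k | x_k, m) and
    # renormalize -- the division plays the role of the 1/Z term above.
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.full(5, 1 / 5)                          # uniform prior over 5 cells
motion = np.array([0.1, 0.8, 0.1])                  # mostly stay in place (assumed)
likelihood = np.array([0.1, 0.1, 0.7, 0.05, 0.05])  # sensor favours cell 2 (assumed)
print(bayes_filter_step(belief, motion, likelihood))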
The SLAM algorithm consists of two main components: front-end processing and back-end processing. The
front-end processing component plays a crucial role in accurately processing the raw sensor data collected
by the robot. It involves several steps, including feature extraction, correspondence matching, and data
association. Feature extraction identifies distinctive features in the sensor data that can serve as reference
points for mapping and localization. These features could be corners, edges, or specific patterns in the
environment. Correspondence matching matches the extracted features with those stored in the map,
allowing the robot to determine its relative position within the environment. Data association ensures correct
alignment of the robot's current view with previous views, enabling the construction of an accurate and
consistent map over time.
The output of the front-end processing component is then passed to the back-end algorithms for further
processing and mapping. The back-end algorithms use the processed sensor data to estimate the robot's
position and orientation within the environment, updating the map representation accordingly. This iterative
process allows the robot to refine its map as it moves through the environment.
It is essential to note that the accuracy and reliability of the front-end processing component significantly
impact the overall performance of the SLAM system. Careful consideration must be given to the selection of
sensors and processing techniques to ensure optimal results.
SLAM is a critical area of research in robotics and computer vision, enabling robots to autonomously create
maps of their surroundings and navigate effectively. By continuously updating the map and determining its
position within it, a robot can make informed decisions, avoid obstacles, and successfully navigate complex
environments. The development of robust SLAM algorithms is vital for the advancement of autonomous
systems, paving the way for enhanced capabilities in various applications, including robotics, self-driving
cars, and augmented reality.
In SLAM, the system state is represented by a set of variables, including the robot's position, orientation,
and the locations of features in the environment. These variables are treated as random variables and are
associated with probability distributions. By utilizing probability theory, SLAM algorithms can incorporate
prior knowledge, such as the robot's initial position, and effectively handle uncertainty in sensor
measurements.
Various SLAM algorithms, including particle filters and graph-based methods, leverage probability theory
to model and estimate the system state. Monte Carlo methods, in particular, are often employed to sample
from the posterior distribution and obtain estimates of the system state. This integration of probability theory
enables SLAM algorithms to provide robust and accurate representations of the system state, compensating
for measurement noise and other sources of error.
A key concept in SLAM is the modeling of variables as random variables. These variables can assume
different values based on the principles of probability. For example, the position of a robot can be
represented as a random variable X, and the probability of the robot being at a specific location is denoted as
p(X = x). The sum of probabilities for all possible values of the random variable must equal 1. In discrete
probability functions, this is expressed as:
$$\sum_{x} P(X = x) = 1$$

Equation 2: Probabilities of all possible values of a random variable sum to 1
The SLAM problem involves creating a map of the environment, denoted as M = {m_1, m_2, ..., m_N}, and
recording the robot's movements over time. This is achieved by capturing the robot's state at each time step,
its observation vector, Z(t), and control signals, U(t). The interval between each sample is defined as
T, the sampling time.
In summary, probability theory provides a mathematical foundation for SLAM algorithms, enabling the
representation and manipulation of uncertainty. By incorporating probability distributions and Monte Carlo
methods, SLAM algorithms can model the system state, estimate it accurately, and account for measurement
noise and other sources of uncertainty.
Lidar SLAM
- Description: a SLAM algorithm that uses a lidar sensor to represent the environment. Lidar sensors provide accurate measurements of distance, which makes them well suited for SLAM applications.
- Advantages: very accurate; can handle complex environments.
- Disadvantages: more expensive than other SLAM algorithms; not as fast as other SLAM algorithms.

Monte Carlo SLAM
- Description: a probabilistic SLAM algorithm that uses a Monte Carlo approach to estimate the pose of the robot and the map of the environment.
- Advantages: very accurate; can handle complex environments.
- Disadvantages: more complex to implement; can be slower than other SLAM algorithms.

Iterative Closest Point (ICP)
- Description: a non-probabilistic SLAM algorithm that uses the ICP algorithm to estimate the pose of the robot and the map of the environment.
- Advantages: fast; easy to implement.
- Disadvantages: not as accurate as other SLAM algorithms; not as good at handling complex environments.

Bundle Adjustment
- Description: a non-probabilistic SLAM algorithm that uses the bundle adjustment algorithm to estimate the pose of the robot and the map of the environment.
- Advantages: very accurate; can handle complex environments.
- Disadvantages: more complex to implement; can be slower than other SLAM algorithms.

Graph-based Monte Carlo SLAM
- Description: a hybrid SLAM algorithm that combines the strengths of graph-based SLAM and Monte Carlo SLAM.
- Advantages: very accurate; can handle complex environments.
- Disadvantages: more complex to implement; can be slower than other SLAM algorithms.

Visual SLAM
- Description: a technique that uses visual sensors, such as cameras, to construct or update a map of an unknown environment while simultaneously estimating the pose of a robot within that environment. It relies on visual features, such as keypoints or landmarks, to track the robot's motion and determine its position and orientation.
- Advantages: cost-effective and widely available sensors; rich environmental information for mapping and localization; robustness to lighting changes; capable of large-scale mapping; non-invasive and contactless.
- Disadvantages: sensitivity to feature visibility and occlusions; computational demands and processing power requirements; dependence on accurate camera calibration; limited performance in low-texture environments; accumulation of drift over time.

Table 1: SLAM algorithm types
Once the Simultaneous Localization and Mapping (SLAM) algorithm has been executed to construct or
update a map of the environment and estimate the robot's pose, the accuracy of the SLAM-based system can
be further improved by applying the Extended Kalman Filter (EKF). After completing the SLAM process,
the EKF can be employed as a post-processing step to refine the estimated robot pose and map. The EKF
operates by fusing additional sensor measurements and incorporating them into the belief state estimation
process. The primary benefit of using the EKF after SLAM is its ability to handle non-linearities and
uncertainties in the system's dynamics and measurements. The SLAM algorithm, while capable of
producing reasonably accurate results, may still exhibit some level of error and uncertainty. The EKF can
help mitigate these issues and enhance the accuracy of the estimated robot pose and map. To apply the EKF
after SLAM, the current estimated state from the SLAM algorithm serves as the initial belief state for the
EKF. The EKF then incorporates subsequent sensor measurements, such as additional range or bearing
measurements, to update and refine the belief state estimate. The EKF uses its motion and measurement
models, along with the sensor data, to iteratively adjust the state estimate and reduce the effects of noise and
uncertainties. By incorporating the EKF after SLAM, the system can benefit from the EKF's ability to handle
non-linearities and model uncertainties, leading to improved accuracy and reliability. The EKF's iterative
estimation and correction process can further enhance the localization accuracy of the robot and the quality
of the constructed map. The effectiveness of applying the EKF after SLAM depends on various factors,
including the specific characteristics of the environment, the quality and type of sensor measurements
available, and the accuracy of the initial SLAM estimate. Additionally, the selection of appropriate motion
and measurement models for the EKF plays a crucial role in achieving optimal results.
In summary, integrating the Extended Kalman Filter (EKF) as a post-processing step after executing the
SLAM algorithm can help enhance the accuracy and reliability of the estimated robot pose and map. By
utilizing the EKF's capabilities in handling non-linearities and uncertainties, the system can achieve
improved localization accuracy and better map quality, leading to enhanced performance in various robotics
applications.
The EKF works by predicting the robot's state from past data and correcting the prediction based on the new measurement. By combining the
predicted state and the measured state, the EKF provides a more accurate estimate of the robot’s location,
which is essential for navigation and localization in the SLAM algorithm. The EKF helps the robot
overcome any errors in its measurement by updating its estimates of the system state at each time step,
leading to a more reliable representation of the environment and the robot's path over time. The equations in
the EKF algorithm describe a process for estimating the state of a non-linear system, such as a robot, given
some measurements and control inputs. The algorithm starts by initializing the state estimate and its
covariance (X(0) and P(0)) and then goes through several steps to refine the estimate at each time step. The
steps are:
o Predict the state at time t:

$$X(t \mid t-1) = f\left(X(t-1), U(t-1)\right)$$

Equation 3: Predict the state at time t

Here, f(X(t−1), U(t−1)) is a function that predicts the state of the system based on the previous
state X(t−1) and control inputs U(t−1).

o Linearize the system dynamics around the predicted state:

$$F(t) = \left.\frac{\partial f}{\partial X}\right|_{X(t \mid t-1),\, U(t-1)}$$

Equation 4: Linearize the system dynamics around the predicted state

This step calculates the Jacobian matrix F(t), which describes the linear approximation of
the non-linear system dynamics at the predicted state X(t | t−1).

o Predict the covariance of the state:

$$P(t \mid t-1) = F(t)\, P(t-1)\, F(t)^{\top} + Q(t)$$

Equation 5: Predict the covariance of the state

where Q(t) is the process noise covariance. This step predicts the uncertainty in the state
estimate by propagating the covariance of the previous state estimate through the linearized
system dynamics.

o Predict the measurement:

$$\hat{Z}(t) = h\left(X(t \mid t-1)\right)$$

Equation 6: Predict the measurement based on the state estimate

Here, h(X(t | t−1)) is a function that predicts the measurement based on the state estimate,
and R(t), the measurement noise covariance, is the uncertainty in the measurement.

o Compute the Kalman gain:

$$K(t) = P(t \mid t-1)\, H(t)^{\top} \left(H(t)\, P(t \mid t-1)\, H(t)^{\top} + R(t)\right)^{-1}$$

Equation 7: Compute the Kalman gain

where H(t) is the Jacobian of the measurement function h evaluated at X(t | t−1). The Kalman
gain is a factor that determines how much the measurement should adjust the state estimate. It
depends on the uncertainty in the state estimate (P(t | t−1)), the measurement prediction
(h(X(t | t−1))), and the measurement noise (R(t)).

o Update the state estimate:

$$X(t \mid t) = X(t \mid t-1) + K(t)\left(Z(t) - h\left(X(t \mid t-1)\right)\right)$$

Equation 8: Update the state estimate

This step updates the state estimate by combining the predicted state and the measurement.

o Update the covariance of the state:

$$P(t \mid t) = \left(I - K(t)\, H(t)\right) P(t \mid t-1)$$

Equation 9: Update the covariance of the state

This step updates the covariance of the state estimate by taking into account the measurement and the
Kalman gain. The Kalman gain K(t) is a weight that reflects the confidence in the measurement Z(t)
compared to the prediction X(t | t−1): the state estimate is updated by a weighted combination of the
prediction and the measurement, where the weight is given by the Kalman gain. If the measurement is
highly confident, the Kalman gain is high and the prediction is corrected significantly by the
measurement; if the measurement is not confident, the Kalman gain is low and the prediction is corrected
only slightly. The Kalman gain is a term from control theory: it represents the weighting factor between
the predicted and measured states, determines how much of each estimate is used to update the state
estimate, and is computed from the covariance matrices of the predicted and measured states and the
measurement noise. The EKF algorithm repeats these steps for each time step to continuously refine the
state estimate and its covariance.
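To make these steps concrete, the following Python sketch implements one generic EKF predict/correct cycle. It is a minimal illustration rather than this project's actual implementation: the model functions f and h, their Jacobians F_jac and H_jac, and the noise covariances Q and R are assumed to be supplied by the caller.

import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF iteration following Equations 3-9 above."""
    # Prediction: propagate the state and its covariance.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Correction: weigh the measurement residual by the Kalman gain.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain (Equation 7)
    x_new = x_pred + K @ (z - h(x_pred))           # state update (Equation 8)
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred  # covariance update (Equation 9)
    return x_new, P_new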
In this project, we will utilize ORB visual SLAM, FastSLAM, GraphSLAM, and EKF SLAM algorithms and compare their
performance for localization and navigation in a delivery robot. The GraphSLAM algorithm will be
employed to construct a map of the environment and estimate the robot's position, while the Extended
Kalman Filter (EKF) will refine and enhance the accuracy of the position estimate. The robot will operate
autonomously, navigating from its initial position to a target location while avoiding obstacles and static
objects. When a network connection is available, the robot will utilize it for communication and receive
additional localization information. However, in the absence of a stable network connection, the robot will
rely on the preexisting map data and the SLAM algorithms to determine its position and plan a path to the
target. The robot's motion control capabilities will be utilized to adapt its path based on new sensor data. The
primary focus of this project will be on developing reliable networking solutions to ensure the robustness of
the robot's navigation and delivery mission, even in scenarios involving network disconnections.
Graph SLAM
What is graph SLAM?
Graph SLAM is a type of SLAM algorithm that represents the environment as a graph. The nodes of the
graph represent the robot's poses, and the edges of the graph represent the spatial constraints between the
poses. These constraints naturally arise from odometry measurements and from feature observations or scan
matching.
A high-level pseudo code of graph SLAM:

for each time step:
    Predict the robot's motion using odometry and update the robot's pose in the graph
    for each observed landmark:
        Compute the measurement model and the measurement error from the sensor readings
        Add the landmark node to the graph if it does not already exist
        Add a constraint (edge) between the robot pose and the landmark in the graph
        Assign the measurement error to the constraint
    Optimize the graph to estimate the robot's trajectory and landmark positions
    Apply a graph optimization algorithm (e.g., Gauss-Newton, Levenberg-Marquardt)
end for
Extract the optimized trajectory and landmark positions from the graph
In this pseudo code, the graph represents the structure that holds the robot's trajectory and the positions of
observed landmarks. At each time step, the robot's motion is predicted using odometry readings, and the
robot's pose is updated in the graph. For each observed landmark, the measurement model is computed
based on the sensor readings, and the measurement error is calculated by comparing the predicted and
observed landmark positions. The landmark node is added to the graph if it doesn't exist, and a constraint
(edge) is added between the robot pose and the landmark in the graph. After processing all sensor
measurements, the graph is optimized using a graph optimization algorithm to minimize the measurement
errors and obtain the best estimate of the robot's trajectory and landmark positions. Finally, the optimized
trajectory and landmark positions can be extracted from the graph for further analysis or visualization.
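As a toy illustration of the optimization step, the sketch below solves a one-dimensional pose graph with three odometry edges and one loop-closure edge by linear least squares. Real graph SLAM problems are nonlinear and are solved iteratively (e.g., with Gauss-Newton or Levenberg-Marquardt); all edge values here are made-up numbers.

import numpy as np

# Poses x0..x3 on a line; each edge encodes x_j - x_i = measured displacement.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9),  # odometry edges (assumed values)
         (0, 3, 3.2)]                            # loop-closure edge (assumed value)
A = np.zeros((len(edges) + 1, 4))
b = np.zeros(len(edges) + 1)
A[0, 0] = 1.0  # anchor x0 = 0 so the solution is unique
for row, (i, j, meas) in enumerate(edges, start=1):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, meas
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # optimized pose estimates, with the loop-closure error spread over all edges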
What are the disadvantages of graph SLAM?
Graph SLAM also has some disadvantages. First, it can be computationally expensive to solve the
optimization problem. Second, the graph can become very large and complex, which can make it difficult to
maintain and update.
Data association: The SLAM algorithm must be able to associate sensor measurements with the correct
poses in the graph. This can be difficult, especially in cluttered environments.
Loop closure: The SLAM algorithm must be able to detect and handle loop closures. Loop closures occur
when the robot revisits a location that it has already been to.
Graph management: The SLAM algorithm must be able to manage the graph effectively. This includes
adding new nodes and edges to the graph, and removing old nodes and edges from the graph.
Visual SLAM
Visual SLAM is a type of SLAM algorithm that utilizes visual information, obtained from cameras, depth
sensors, or other image and depth data capturing devices, to track the robot's pose and map the environment.
It consists of a front-end component that extracts and tracks features from the visual data, and a back-end
component that estimates the robot's pose and creates the map using the tracked features. Visual SLAM
offers several advantages, including cost-effectiveness, applicability to various environments, and the ability
to simultaneously track pose and map the environment. However, it also has drawbacks, such as increased
sensitivity to noise and errors, challenges in feature tracking in cluttered environments, and potential
computational complexity when dealing with a large number of features.
Comparison between popular visual SLAM algorithms to choose a suitable one for the project:
ORB-SLAM
- Sensors: single camera
- Accuracy and robustness: accurate and robust; able to work in real time
- Complexity and cost: moderately complex; moderately computationally expensive
- Typical use: indoor and outdoor environments, autonomous navigation
- Drawbacks: not as versatile as some other SLAM methods; can be more difficult to use

RGB-D SLAM
- Sensors: camera with RGB and depth sensors
- Accuracy and robustness: more accurate than monocular SLAM; can build more detailed maps
- Complexity and cost: more complex; more computationally expensive than monocular SLAM
- Typical use: indoor and outdoor environments, complex tasks
- Drawbacks: more complex and computationally expensive than monocular SLAM

Visual-Inertial SLAM
- Sensors: IMU + camera
- Accuracy and robustness: most accurate; combines the strengths of visual SLAM and IMU-based SLAM
- Complexity and cost: most complex; most computationally expensive
- Typical use: most challenging environments requiring the highest level of accuracy and robustness
- Drawbacks: most complex and computationally expensive type of SLAM

Table 2: Visual SLAM types
A high-level pseudo code of ORB visual SLAM:

for each frame in the video sequence:
    Detect ORB features and compute their descriptors
    if it is the first frame:
        Initialize the map with a keyframe and set the initial camera pose
    else:
        Match the current frame's descriptors with the previous frame's descriptors
        Perform feature matching and filtering (e.g., using RANSAC)
        Estimate the camera pose and refine the camera poses and 3D points with bundle adjustment
    if loop closure is detected:
        Perform loop closure detection and correction
        Optimize the map by adjusting the camera poses and 3D points
    Add or cull keyframes according to the keyframe selection criteria
    Update the current frame as the previous frame for the next iteration
end for
In this pseudo code, ORB Visual SLAM utilizes the ORB (Oriented FAST and Rotated BRIEF) features for
feature detection and description. It processes a sequence of video frames and builds a map of the
environment while estimating the camera poses. For each frame, ORB features are detected and descriptors
are computed. If it's the first frame, the map is initialized with a keyframe containing the detected features
and descriptors, and the initial camera pose is set. For subsequent frames, feature matching is performed
between the current frame's descriptors and the previous frame's descriptors. If enough matches are found,
the camera pose is estimated using the matched features, and bundle adjustment is performed to refine the
camera poses and 3D points. If loop closure is detected, loop closure detection and correction are performed
to handle revisited areas. The map is optimized by adjusting the camera poses and 3D points to improve
consistency. Keyframe selection criteria are applied to determine when to add a new keyframe to the map.
Redundant keyframes are culled to optimize the map and improve efficiency. The process continues until all
frames are processed, resulting in a map representation and estimated camera poses.
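As a small illustration of the front-end feature step, the following OpenCV sketch detects and matches ORB features between two consecutive frames. The file names frame1.png and frame2.png are placeholders; this shows only the matching stage, not a full visual SLAM pipeline.

import cv2

# Load two consecutive frames (placeholder file names).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors for each frame.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking rejects
# asymmetric matches, a cheap outlier filter before RANSAC.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches between the two frames")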
FastSLAM
FastSLAM is a probabilistic SLAM algorithm that combines Monte Carlo localization (MCL) with a Rao-
Blackwellized particle filter. It maintains a belief distribution over the robot's pose and the environment map
using a set of particles, where each particle represents a hypothesized pose and map. The algorithm updates
the particles through a motion update step, where the particles are updated based on the robot's motion
model, and a measurement update step, where the particles are updated based on sensor measurements.
FastSLAM offers advantages such as efficiency, robustness to sensor noise, and the ability to
simultaneously track pose and map. However, it can have computational challenges during initialization,
sensitivity to the number of particles chosen, and difficulties in tracking pose in highly dynamic
environments.
A high-level pseudo code of FastSLAM:

for each time step:
    Predict each particle's pose using the motion model and odometry
    Update each particle's weight from the landmark measurement likelihood, then resample
    Estimate the robot's pose using the weighted average of the particle poses
end for
Extract the final map from the particles with their associated weights
In this pseudo code, FastSLAM uses a set of particles to represent possible robot poses and their associated
maps. At each time step, the robot's motion is predicted using odometry readings, and the particle poses are
updated based on a motion model. For each particle, the algorithm processes the observed landmarks. The
measurement model is computed, and the likelihood of the landmark measurement given the particle's pose
is evaluated. The particle's weight is updated based on the measurement likelihood. After updating the
particle weights, a resampling step is performed to select new particles for the next iteration. The probability
of selection is proportional to the particle's weight. The robot's pose is estimated by computing the weighted
average of the particle poses, providing an estimate of the robot's position. Finally, the map is updated by
associating each observed landmark with the highest-weighted particle and updating its position in the map.
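A minimal sketch of the weighting, pose-estimation, and resampling steps in one dimension follows; the landmark position, measurement, and noise level are made-up values for illustration.

import numpy as np

rng = np.random.default_rng(0)

# 100 hypothesized 1-D robot poses (particles) near x = 2.
particles = rng.normal(2.0, 0.5, size=100)

# Range measurement to a landmark at x = 5 (all values assumed).
landmark, z, sigma = 5.0, 3.1, 0.2
expected = landmark - particles                         # predicted range per particle
weights = np.exp(-0.5 * ((z - expected) / sigma) ** 2)  # Gaussian measurement likelihood
weights /= weights.sum()

pose_estimate = np.sum(weights * particles)             # weighted-average pose estimate

# Resample: draw a new particle set in proportion to the weights.
particles = particles[rng.choice(len(particles), size=len(particles), p=weights)]
print(pose_estimate)  # close to landmark - z = 1.9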
EKF SLAM
EKF SLAM is a SLAM algorithm that utilizes an extended Kalman filter (EKF) to estimate the robot's pose
and map the environment. The EKF is a recursive filter that can handle noisy sensor measurements and
estimate the state of a dynamic system. In EKF SLAM, a belief distribution represented by a Gaussian
distribution is maintained, and it is updated in two steps. The motion update incorporates the robot's motion
model into the distribution, while the measurement update incorporates sensor measurements. EKF SLAM
offers advantages such as real-time efficiency, relative ease of implementation, and simultaneous tracking of
pose and mapping.
A high-level pseudo code of EKF SLAM:

for each time step:
    Predict the robot's pose using the motion model and the odometry readings
    for each observed landmark in the sensor measurements:
        if the landmark is new:
            Add it to the map with an initial estimate
        else:
            Retrieve the landmark's previous estimate from the map
        Compute the expected measurement based on the current robot pose and landmark estimate
        Compute the measurement Jacobian matrix
        Update the landmark's estimate using the Extended Kalman Filter equations:
            - Compute the Kalman gain
            - Compute the measurement residual
            - Update the landmark's estimate based on the Kalman gain and measurement residual
    Update the robot's pose and covariance using the Extended Kalman Filter equations:
        - Compute the motion Jacobian matrix
        - Compute the motion residual
        - Update the robot's pose and covariance based on the motion Jacobian and residual
end for
In this pseudo code, EKF SLAM estimates the robot's pose and landmark positions in an environment using
an Extended Kalman Filter. At each time step, the robot's motion is predicted using odometry readings, and
the robot's pose is updated based on a motion model. For each observed landmark in the sensor
measurements, the algorithm checks if the landmark is new or already in the map. If it's a new landmark, it
is added to the map with an initial estimate. The expected measurement is computed based on the current
robot pose and landmark estimate, and the measurement Jacobian matrix is computed. The landmark's
estimate is updated using the Extended Kalman Filter equations, which involve computing the Kalman gain,
measurement residual, and updating the estimate based on the gain and residual. Similarly, the robot's pose
and covariance are updated using the Extended Kalman Filter equations, considering the motion Jacobian
matrix and motion residual. The process continues until all sensor measurements are processed, resulting in
an estimated map of landmarks and the robot's trajectory.
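To illustrate the landmark update step, the sketch below performs one EKF update of a single 2-D landmark from a range-bearing measurement. It is a deliberately simplified fragment: the robot pose is held fixed and only the landmark's 2x2 covariance block is updated, whereas a full EKF SLAM implementation maintains a joint state over the pose and all landmarks.

import numpy as np

def landmark_update(robot_pose, lm, P_lm, z, R):
    """One EKF update of a landmark estimate lm (2-vector) with covariance P_lm,
    given a range-bearing measurement z = (range, bearing) and noise covariance R."""
    dx, dy = lm[0] - robot_pose[0], lm[1] - robot_pose[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q),                            # expected range
                      np.arctan2(dy, dx) - robot_pose[2]])   # expected bearing
    H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],        # measurement Jacobian
                  [-dy / q,         dx / q]])                # w.r.t. the landmark position
    K = P_lm @ H.T @ np.linalg.inv(H @ P_lm @ H.T + R)       # Kalman gain
    nu = z - z_hat                                           # measurement residual
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi            # wrap bearing to [-pi, pi]
    return lm + K @ nu, (np.eye(2) - K @ H) @ P_lm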
Landmarks
In EKF SLAM (Extended Kalman Filter SLAM), landmarks are distinctive features or points of interest in
the environment that the robot can observe and use to estimate its own position and orientation. These
landmarks can be objects, corners, edges, or key points that have spatial coordinates (e.g., x, y)
in a global or map frame of reference. The EKF SLAM algorithm aims to estimate the positions of these
landmarks and the robot's pose by iteratively incorporating sensor measurements and motion updates.
During the SLAM process, the robot's sensors, such as cameras, lasers, or range finders, detect and provide
measurements of the observed landmarks in the environment. These measurements, combined with the
robot's motion information, are used to update the estimates of both the robot's pose and the landmark
positions. By continuously updating these estimates, EKF SLAM builds an accurate map of the environment
while simultaneously localizing the robot within that map.
Landmarks play a crucial role in SLAM algorithms as they provide essential information for mapping and
localization. They serve as reference points for position estimation, enabling the robot to improve its own
localization by detecting and estimating distances and bearings to these landmarks. Landmarks also facilitate
data association, helping the robot match observed features with known landmarks to determine
measurement correspondences. Moreover, landmarks contribute to map creation by representing the spatial
layout of the environment. The map built using landmark positions can be utilized for navigation, path
planning, and interaction with the surroundings. Landmarks also provide redundancy, ensuring robustness in
the face of sensor noise or temporary unavailability of measurements.
Furthermore, landmarks aid in loop closure detection, which occurs when a robot revisits a previously
visited location. By recognizing landmarks that have been observed before, the robot can close the loop and
improve the consistency of the map. Loop closure helps correct accumulated errors in position estimation
and map representation. Overall, landmarks are vital in SLAM algorithms, offering reliable reference points,
aiding in data association, contributing to map creation, increasing system robustness, and enabling loop
closure detection. They enable accurate localization and mapping in various robotic applications, including
navigation, exploration, and mapping.
The mapping accuracy in SLAM can be evaluated using different metrics depending on the specific
application and requirements. One commonly used metric is the mean landmark error, which measures the
average difference between the estimated positions of landmarks in the map and their ground truth positions.
The mean landmark error can be calculated using the following equation:

$$\text{mean landmark error} = \frac{1}{N} \sum_{i=1}^{N} \left| \text{estimated\_position}_i - \text{ground\_truth\_position}_i \right|$$

Equation 10: Mean landmark error

where:
- N is the total number of landmarks in the map.
- estimated_position represents the estimated position of a landmark in the map.
- ground_truth_position represents the known or ground truth position of the same landmark.
In this equation, the absolute difference between the estimated position and the ground truth position is
calculated for each landmark, and then averaged over all landmarks to obtain the mean landmark error.
This metric provides a measure of how accurately the SLAM system is able to estimate the positions of
landmarks in the environment. A lower mean landmark error indicates a higher mapping accuracy, meaning
the estimated map aligns closely with the ground truth map. It's important to note that there may be other
metrics used to evaluate mapping accuracy in different SLAM systems, depending on the specific
requirements or constraints of the application.
Localization accuracy in SLAM can be evaluated using various metrics, depending on the specific
requirements and characteristics of the system. One commonly used metric is the distance error, which
measures the difference between the estimated position of the robot and its ground truth position:

$$\text{distance error} = \left| \text{estimated\_position} - \text{ground\_truth\_position} \right|$$

Equation 11: Distance error

where:
- estimated_position represents the estimated position of the robot.
- ground_truth_position represents the known or ground truth position of the robot.
In this equation, the absolute difference between the estimated position and the ground truth position is
calculated to quantify the localization error.
Another commonly used metric is the angle error, which measures the angular difference between the
estimated orientation of the robot and its ground truth orientation. The angle error can be calculated as:

$$\text{angle error} = \left| \text{estimated\_orientation} - \text{ground\_truth\_orientation} \right|$$

Equation 12: Angle error

where:
- estimated_orientation represents the estimated orientation of the robot.
- ground_truth_orientation represents the known or ground truth orientation of the robot.
Similar to the distance error, the absolute difference between the estimated orientation and the ground truth
orientation is calculated to assess the angular localization error. These metrics provide a measure of how
accurately the SLAM system can estimate the position and orientation of the robot. Lower distance and
angle errors indicate higher localization accuracy, meaning the estimated pose aligns closely with the ground
truth pose. It's worth mentioning that different SLAM systems may employ additional or alternative metrics
to evaluate localization accuracy, based on the specific requirements and constraints of the application.
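The three metrics above are straightforward to compute; the following sketch evaluates them with NumPy on made-up estimated and ground-truth values, wrapping the angle difference so that errors near ±180° are handled correctly.

import numpy as np

def mean_landmark_error(estimated, ground_truth):
    """Equation 10: average distance between landmark estimates and ground truth."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return np.mean(np.linalg.norm(diff, axis=1))

def distance_error(estimated_position, ground_truth_position):
    """Equation 11: distance between estimated and ground-truth robot positions."""
    return np.linalg.norm(np.asarray(estimated_position) - np.asarray(ground_truth_position))

def angle_error(estimated_orientation, ground_truth_orientation):
    """Equation 12: absolute angular difference, wrapped to [0, pi]."""
    d = (estimated_orientation - ground_truth_orientation + np.pi) % (2 * np.pi) - np.pi
    return abs(d)

# Example with made-up values.
print(mean_landmark_error([[1.0, 2.0], [3.0, 4.1]], [[1.1, 2.0], [3.0, 4.0]]))
print(distance_error([0.5, 0.2], [0.4, 0.1]))
print(angle_error(3.1, -3.1))  # small error despite the sign flip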
ROS 2 Humble
ROS 2 (Robot Operating System 2) is an advanced software framework consisting of libraries and tools
specifically developed for constructing robot applications. As the successor to ROS 1, ROS 2 has been
designed to offer improved scalability, reliability, and security. Among the various releases of ROS 2, the
eighth version, known as ROS 2 Humble, was launched in May 2022, bringing with it a host of new features
and enhancements.
ROS 2 Humble boasts several notable features. One prominent element is its default middleware, Fast DDS
(formerly known as FastRTPS), a high-performance DDS implementation tailored for real-time
applications. Additionally, ROS 2 Humble places a strong emphasis on security, incorporating
enhancements such as support for encryption and authentication. To further enhance usability, ROS 2
Humble includes a range of new tools, including an intuitive graphical user interface (GUI) for efficient
management of ROS 2 projects. Furthermore, the documentation for ROS 2 Humble has been significantly
improved, providing users with comprehensive and accessible resources.
With its focus on reliability, security, and user-friendliness, ROS 2 Humble represents a significant
milestone in the evolution of ROS 2. It serves as an ideal choice for developers seeking a powerful and
scalable middleware to facilitate the development of robotics applications.
RViz:
RViz2 is the port of RViz to ROS 2. It provides a graphical interface for users to view their robot, sensor data,
maps, and more. It is installed by default with the ROS 2 desktop variant and requires a desktop version of
Ubuntu to use.
Webots:
Webots is a versatile and free 3D robot simulator that finds application in industry, education, and research.
With its extensive capabilities, Webots allows users to simulate robot behavior in virtual environments.
Equipped with a vast library of robots, sensors, and actuators, Webots employs a physics engine that
faithfully reproduces the real-world characteristics of these components. This empowers users to create
highly realistic simulations of robots and their corresponding environments.
What sets Webots apart is its user-friendly programming features. Users can conveniently program robots
using various languages such as C, C++, Python, Java, MATLAB, and ROS. The inclusion of a graphical
user interface further simplifies the creation and modification of robot programs. Key features of Webots
include its open-source nature, cross-platform compatibility (Windows, macOS, and Linux), extensive robot
library, physics engine for realistic simulations, support for multiple programming languages, and an
intuitive graphical user interface. Webots serves as an invaluable tool for numerous individuals and groups.
Robot developers can leverage it to simulate robot behavior before physical implementation, thereby
identifying and rectifying potential issues in their designs. Researchers benefit from Webots' capability to
study robot behavior in diverse environments, enabling the development of novel algorithms and control
techniques. Lastly, educators can utilize Webots to impart robotics knowledge, allowing students to grasp
fundamental principles of robot control and enhance their programming skills.
2D SLAM MRPT
2D SLAM (Simultaneous Localization and Mapping) with MRPT (the Mobile Robot Programming Toolkit)
is an approach used to estimate the position and map of a robot in a 2D environment using sensor data. A
core step in such SLAM pipelines, for which MRPT provides maximum-likelihood-based implementations,
is estimating the robot's relative pose (movement) between consecutive time steps.
In 2D SLAM with MRPT, the robot utilizes various sensors, such as laser range finders or cameras, to
gather data about its surroundings. The sensor data is processed to extract relevant features and landmarks in
the environment, such as walls or objects. The robot then estimates its position and orientation (localization)
using the acquired sensor measurements and previously constructed map.
Relative pose estimation plays a crucial role in this process. Based on the observed sensor data,
probabilistic techniques such as maximum likelihood estimation determine the most likely transformation of
the robot's pose between two time steps. By iteratively applying these relative pose estimates, the robot can
incrementally build a map of the environment while simultaneously updating its localization estimates.
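To make the incremental update concrete, the following minimal Python sketch (illustrative only, not taken
from MRPT) composes a relative motion expressed in the robot frame onto the current 2D pose estimate:

import math

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a relative motion given in the robot frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            (th + dth + math.pi) % (2 * math.pi) - math.pi)  # wrap to [-pi, pi]

# Example: move 1 m forward while turning 90 degrees, then 1 m forward again.
pose = (0.0, 0.0, 0.0)
for step in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, step)
print(pose)  # approximately (1.0, 1.0, pi/2)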
The combination of 2D SLAM and MRPT-based relative pose estimation enables a robot to autonomously explore and navigate an
unknown 2D environment while simultaneously building a map of its surroundings. This technology finds
applications in various fields, including robotics research, autonomous navigation, and mapping for tasks
such as localization, path planning, and obstacle avoidance.
System Design
System Requirements
In the Webots simulation, supported by other platforms, the Festo mobile robot (Robotino) will utilize the
SLAM algorithm to enhance its mapping and localization capabilities. This combination of algorithms
allows the robot to accurately map its environment and estimate its position within the map. Equipped with
sensors such as cameras and an IMU, the robot creates a 3D representation of the surroundings and employs
the maps for obstacle avoidance and efficient movement. Actuators like wheels, motors, and controllers
enable the robot to navigate in any direction and rotate as needed. The robot's networking capability ensures
connectivity for communication and localization, enabling it to receive instructions and updates. In the event
of network disruption, the robot relies on stored map data and SLAM algorithms to determine its position.
The simulation focuses on robust networking solutions and implements a reconnection protocol for
uninterrupted missions. A user interface accessible through the Festo web platform enables remote control,
monitoring, and configuration. Safety measures, including obstacle detection sensors and emergency stop
mechanisms, prioritize the robot's stability and prevent collisions. Regular software testing and updates
address security vulnerabilities, while a power management system optimizes battery consumption for
maximum autonomy. The Webots simulation provides a realistic environment to evaluate the performance of
the Festo mobile robot equipped with different types of SLAM and other relevant algorithms.
Methodology
To apply the project involving Robotino and SLAM algorithms, start by familiarizing yourself with
Robotino and learning about SLAM algorithms. Choose suitable software tools or libraries for implementing
SLAM on the Robotino platform. Collect sensor data, implement the selected SLAM algorithm, and
evaluate the accuracy of the system by calculating error metrics. Use the SLAM algorithm to build and
update a map of the environment, and iterate on the implementation to improve accuracy and mapping
quality.
System Block Diagram
The system block diagram for the delivery robot includes a power block, sensor and camera block, SLAM
blocks, data storage block, API interface, controller block, actuating unit (motor driver), and feedback loops.
The power block provides the necessary power supply, while the sensor and camera block gather
environment information. The SLAM blocks process and refine the data, which is stored in the data storage block
and connected to the API interface. The controller block receives information from the API interface and
data storage block to control the robot's motion through the actuating unit. Feedback loops monitor and
adjust system performance. The project focuses on developing robust networking solutions to ensure reliable
navigation and delivery, even in the event of network disconnection. In such cases, the robot relies on saved
map data and SLAM algorithms to determine its position and calculate accuracy.
Flow Chart
This flowchart represents the integration of SLAM for a delivery robot in the context of a project
focused on developing a robust, autonomous delivery robot. The robot, equipped with the SLAM
algorithm, will be capable of traveling from a home position to a target location while avoiding obstacles
and other static elements, either by
relying on network information or previously saved map data. The main goal of the project is to ensure the
reliability of the robot's navigation and delivery mission even in case of network disconnection, through the
development of robust networking solutions. The flowchart outlines the steps involved in the robot's
localization and navigation process, including initialization, network connection check, map construction
and estimation of the robot's position, refinement of the position estimate, path planning, navigation to the
target location, and adjusting the path based on new sensor information. The flowchart starts with
initializing the robot's position and map, which involves determining the starting location of the robot and
creating an initial map of the environment using prior knowledge or the robot's sensors. The initial robot
position and map data are then stored.
The next step is to check for a network connection. If the connection is available, the robot receives
additional information for localization. However, if the connection is lost or unstable, the robot relies on its
saved map data. The SLAM algorithm is then used to construct a map of the environment and estimate the
robot's position within the map. The robot then plans a path to the target location and navigates to it while
avoiding obstacles. If new sensor information is available, the robot adjusts its path accordingly. This
process is repeated until the target location is reached.
In conclusion, the flowchart is designed to ensure that the robot can complete its mission, even if a network
connection is lost or unstable. The focus of this project is on developing robust networking solutions and
integrating SLAM for a delivery robot to provide reliable navigation and delivery services. The SLAM
process begins with the initialization of the robot's position and a rough map of the environment. The robot
uses its onboard sensors such as odometry or laser rangefinder readings to gather information about its
surroundings. This information is then used to construct a graph representation of the environment. Graph
optimization algorithms are applied to estimate the robot's position within the environment map. As the
robot continues to move and gather more information, the map and the robot's position estimate are updated
accordingly. This process is repeated until the robot reaches its target location.
IMPLEMENTATION:
Designing and Implementing the Experimental Setup
The project initially involved working with the real Robotino robot for the implementation of the SLAM
algorithm. However, due to certain challenges and limitations encountered during the early stages of the
project, a decision was made to switch to a simulation software. After extensive research and exploration of
various options, the Webots simulation software was chosen as the platform to continue the project.
Webots provides a realistic and controlled virtual environment that closely mimics real-world scenarios,
allowing for thorough testing and evaluation of the robot's capabilities. It offers a wide range of features and
functionalities, including accurate physics simulation, sensor modeling, and flexible programming
interfaces, making it an ideal choice for this project.
The transition from working with the real Robotino to using Webots required adapting the existing codebase
and integrating it into the simulation environment. The Robotino's hardware components, such as sensors
and actuators, were emulated within Webots to replicate the robot's functionality accurately.
By leveraging Webots, the project team was able to continue the development and evaluation of the SLAM
algorithm in a more controlled and efficient manner. The simulation environment provided the flexibility to
algorithm in a more controlled and efficient manner. The simulation environment provided the flexibility to
create various test scenarios, adjust parameters, and collect data for analysis, enabling comprehensive
validation and optimization of the SLAM algorithm's performance.
Additionally, Webots offered the advantage of easy scalability, as multiple instances of the simulated
Robotino robots could be deployed simultaneously for parallel testing and comparison of different SLAM
algorithms, such as visual SLAM, graph SLAM, and ORB SLAM.
Overall, the decision to switch to Webots as the simulation software brought significant benefits to the
project, including increased flexibility, scalability, and efficient development and testing processes. It
enabled the project team to overcome the challenges faced with the real Robotino and continue making
progress towards achieving the project's objectives.
This project was implemented in the following steps:
I chose ROS2 (Robot Operating System 2) for my project, specifically the Humble release, due to its
numerous advantages. Here are the reasons why I made this choice:
1. Flexibility and Scalability: ROS2 offers a flexible and scalable framework that caters to the diverse needs
of my project. It provides a modular architecture, allowing me to choose the specific components and
features that are most suitable for my application. Whether I'm working on a small-scale project or a large-
scale deployment, ROS2 can adapt and scale accordingly.
2. Improved Performance: ROS2 introduces several performance enhancements compared to its predecessor,
ROS1. It utilizes a more efficient communication middleware called Data Distribution Service (DDS),
which enables faster and more reliable data exchange between system components. This improved
performance is crucial for real-time and mission-critical applications where timing and reliability are
paramount.
3. Enhanced Security and Reliability: ROS2 incorporates important updates to enhance the security and
reliability of robotic systems. It introduces a more secure communication layer, supports encryption, and
implements authentication mechanisms, making it suitable for projects that require robust security measures.
Additionally, ROS2's fault tolerance features help ensure system resilience in the face of errors or failures.
4. Growing Ecosystem and Community: ROS2 has gained significant traction and is backed by a rapidly
growing community of developers, researchers, and robotics enthusiasts. This expanding ecosystem means
there are abundant resources, libraries, and tools available, making it easier to develop, test, and deploy my
project. The collaborative nature of the community also provides an opportunity to learn from experts and
receive support when encountering challenges.
5. Long-term Viability: ROS2 is designed with long-term viability in mind. Its development is supported by
Open Robotics, a non-profit organization committed to advancing open-source robotics. This ensures
ongoing development, maintenance, and support for ROS2, giving me confidence in its sustainability and
longevity.
6. Interoperability and Integration: ROS2 has improved interoperability and integration capabilities,
allowing me to seamlessly connect with a wide range of hardware, software, and robotic systems. It supports
various communication protocols, interfaces, and device drivers, enabling me to incorporate different
components into my project without extensive modifications. This interoperability simplifies integration
efforts and enables easy collaboration with other projects.
Overall, I chose ROS2 for my project because it combines the benefits of a flexible and scalable framework,
improved performance, enhanced security and reliability, a thriving community, long-term viability, and
seamless interoperability. These factors make ROS2 a powerful choice for developing robust, adaptable, and
collaborative robotic systems. In addition, most of its packages are easy to download and work with.
Lastly, using Ubuntu as the host operating system allows me to fully leverage the Linux ecosystem. Ubuntu
is a popular Linux distribution widely used by the ROS community, and many ROS tutorials, resources, and
packages are specifically tailored for Ubuntu. This ensures better compatibility, ease of installation, and
access to a wealth of community support.
In summary, while VirtualBox can be useful for certain scenarios, I prefer to install Ubuntu directly when
working with ROS2 Humble. It provides better performance, smoother package management, easier
hardware integration, and full access to the Linux ecosystem, making it an ideal choice for developing and
running ROS2 projects.
For MacBook
If you have a MacBook like me and installing Ubuntu directly is not a feasible option for you, using
VirtualBox to run Ubuntu and ROS2 Humble is a suitable alternative. While there may be some limitations
and challenges associated with running Ubuntu in a virtual environment on a MacBook, it can still allow
you to work with ROS2 effectively. Here's why using VirtualBox on your MacBook can be a practical
choice:
1. Isolation and Safety: Running Ubuntu within a virtual machine provides a level of isolation from your
host macOS environment. It allows you to experiment with different configurations, packages, and ROS2
setups without affecting your MacBook's primary operating system. This isolation ensures a safer
environment for testing and development.
2. Convenience and Portability: VirtualBox offers the advantage of portability. You can create snapshots or
backups of your Ubuntu virtual machine, making it easy to transfer your ROS2 development environment to
other machines if needed. It also provides the convenience of running Ubuntu alongside your macOS
applications, allowing you to switch between environments without restarting your computer.
3. Resource Management: Although running Ubuntu in a virtual machine may have some performance
overhead, modern MacBook models generally have sufficient processing power and memory to handle
ROS2 applications within VirtualBox. By allocating the appropriate amount of resources (CPU cores, RAM)
to the virtual machine, you can optimize the performance of your ROS2 projects.
4. Compatibility with ROS2: VirtualBox provides a compatible environment for running Ubuntu and ROS2.
Many ROS2 tutorials, packages, and resources are designed to work seamlessly on Ubuntu, and VirtualBox
allows you to create a virtual Ubuntu environment that closely resembles a native installation.
While running Ubuntu and ROS2 in a virtual environment may have some limitations, using VirtualBox on
your MacBook is a practical solution that enables you to work with ROS2 and develop your projects
effectively. It allows you to leverage the capabilities of ROS2 Humble while still benefiting from the
flexibility and convenience of using a virtual machine on your MacBook.
For Windows:
ROS2 Humble is primarily designed to work on Linux-based operating systems. While Windows is not the
officially supported platform for ROS2, there are options available to run ROS2 on Windows. Here are
some important points to consider when using ROS2 Humble on Windows:
1. Windows Subsystem for Linux (WSL): One way to use ROS2 on Windows is by utilizing the Windows
Subsystem for Linux (WSL). WSL allows you to run a Linux distribution, such as Ubuntu, within a
Windows environment. By installing a compatible Linux distribution through WSL, you can then install and
use ROS2 Humble as you would on a native Linux system. However, it's important to note that not all
features and functionalities of ROS2 may be fully supported or optimized in this setup.
2. ROS2 Windows Native: ROS2 has been making progress in supporting Windows as a native platform.
Efforts have been made to provide official builds and support for ROS2 on Windows. It is recommended to
check the ROS2 documentation and community forums for the latest information on Windows support,
including installation instructions and compatibility considerations.
3. ROS2 Development Tools: While running ROS2 on Windows may be possible, it's important to note that
some ROS2 development tools and packages may have limited Windows compatibility. This could include
certain ROS2 packages that have dependencies on Linux-specific libraries or utilities. It may require
additional effort and troubleshooting to ensure compatibility or find suitable alternatives for Windows.
4. Community Support: The ROS2 community is active and vibrant, with developers constantly working on
improving compatibility and providing assistance for running ROS2 on Windows. Engaging with the
community forums, discussion groups, and documentation can provide valuable insights, workarounds, and
solutions for specific issues encountered while using ROS2 on Windows.
In summary, while Windows is not the officially supported platform for ROS2, it is possible to run ROS2
Humble on Windows using solutions like WSL or through Windows native support. However, it's important
to be aware of the potential limitations, compatibility challenges, and performance considerations that may
arise when using ROS2 on a non-Linux platform. Keeping up with the latest developments, seeking
community support, and carefully considering the specific requirements of your project will help you
navigate ROS2 on Windows successfully. In my own experiments, however, using VirtualBox proved more stable.
1. Download VirtualBox: Visit the official VirtualBox website (https://www.virtualbox.org) and download
the version of VirtualBox suitable for macOS.
Figure 7 downloaded VirtualBox package
2. Install VirtualBox: Locate the downloaded VirtualBox package (.dmg file) and double-click on it to start
the installation process. Follow the on-screen instructions to complete the installation.
3. Download Ubuntu ISO: Go to the official Ubuntu website (https://ubuntu.com) and download the Ubuntu
Desktop ISO image. Choose the appropriate version based on your requirements (e.g., 64-bit, LTS).
Figure 8 upload ubuntu
4. Create a New Virtual Machine: Open VirtualBox, click on the "New" button to create a new virtual
machine. Give it a name (e.g., Ubuntu) and select "Linux" as the type and "Ubuntu (64-bit)" as the version.
Set the desired amount of memory (RAM) for the virtual machine, keeping in mind the system requirements
of Ubuntu.
5. Create a Virtual Hard Disk: Choose the "Create a virtual hard disk now" option and select "VDI
(VirtualBox Disk Image)" as the hard disk file type. Select "Dynamically allocated" for the storage, then
specify the size of the virtual hard disk. The recommended minimum is around 20-30 GB, depending on
your needs.
6. Configure Virtual Machine Settings: With the virtual machine created, select it from the VirtualBox
Manager interface and click on "Settings." Adjust the settings as needed, including the number of CPU
cores, display settings, network configurations, and any additional devices or features you want to enable.
7. Install Ubuntu: With the virtual machine settings configured, select the virtual machine and click on
"Start" to launch it. In the VirtualBox window, click on the "Choose a virtual optical disk file" button and
select the Ubuntu ISO you downloaded. The virtual machine will start booting from the ISO file, and you
can follow the on-screen instructions to install Ubuntu.
8. Complete Ubuntu Installation: During the Ubuntu installation process, you'll be prompted to select
installation options, create a username and password, and configure system settings. Follow the installation
wizard until Ubuntu is successfully installed on the virtual machine.
9. Install Guest Additions: After Ubuntu installation, it is recommended to install VirtualBox Guest
Additions. In the VirtualBox window, go to the "Devices" menu and select "Insert Guest Additions CD
image." Follow the on-screen instructions within Ubuntu to install the Guest Additions, which provide
additional features and better integration between the host and guest systems.
10. Start Ubuntu: Once the Guest Additions are installed, restart the virtual machine. Ubuntu should now
start up within the VirtualBox window, and you can log in to your Ubuntu desktop environment.
Congratulations! You have successfully installed VirtualBox and run Ubuntu on your MacBook using a
virtual machine. You can now use Ubuntu within VirtualBox for various purposes, including running ROS2
Humble and developing your projects.
On Windows :
To install VirtualBox and run Ubuntu on your Windows machine, follow these steps:
1. Download VirtualBox: Visit the official VirtualBox website (https://www.virtualbox.org) and download
the version of VirtualBox suitable for Windows. Choose the installer based on your operating system
version (e.g., Windows 10, 64-bit).
2. Install VirtualBox: Locate the downloaded VirtualBox executable (.exe) file and double-click on it to start
the installation process. Follow the on-screen instructions to complete the installation. You may need
administrator privileges to install VirtualBox.
3. Download Ubuntu ISO: Go to the official Ubuntu website (https://ubuntu.com) and download the Ubuntu
Desktop ISO image. Choose the appropriate version based on your requirements (e.g., 64-bit, LTS).
4. Create a New Virtual Machine: Open VirtualBox, click on the "New" button to create a new virtual
machine. Give it a name (e.g., Ubuntu) and select "Linux" as the type and "Ubuntu (64-bit)" as the version.
5. Set Memory and Storage: Assign an appropriate amount of memory (RAM) for the virtual machine. The
recommended minimum for Ubuntu is around 2 GB, but more is preferable for smoother performance. Next,
create a virtual hard disk by selecting "Create a virtual hard disk now" and choosing "VDI (VirtualBox Disk
Image)" as the file type. Select "Dynamically allocated" for the storage option.
6. Configure Virtual Machine Settings: With the virtual machine created, select it from the VirtualBox
Manager interface and click on "Settings." Adjust the settings as needed, including the number of CPU
cores, display settings, network configurations, and any additional devices or features you want to enable.
7. Mount Ubuntu ISO: In the VirtualBox Manager, select the virtual machine you created and click on
"Start." In the pop-up window, browse and select the Ubuntu ISO file you downloaded in Step 3. This will
allow the virtual machine to boot from the Ubuntu ISO.
8. Install Ubuntu: The virtual machine will start booting from the Ubuntu ISO, and the Ubuntu installation
process will begin. Follow the on-screen instructions to install Ubuntu, including selecting installation
options, creating a username and password, and configuring system settings. Choose the installation type
that suits your needs (e.g., erase disk and install Ubuntu or manual partitioning).
9. Complete Ubuntu Installation: After the installation completes, restart the virtual machine. Ubuntu should
now start up within the VirtualBox window, and you can log in to your Ubuntu desktop environment.
10. Install Guest Additions: It is recommended to install VirtualBox Guest Additions to enhance the
functionality and integration between the host and guest systems. In the VirtualBox window, go to the
"Devices" menu and select "Insert Guest Additions CD image." Follow the on-screen instructions within
Ubuntu to install the Guest Additions.
Congratulations! You have successfully installed VirtualBox and run Ubuntu on your Windows machine
using a virtual machine. You can now utilize Ubuntu within VirtualBox for various purposes, including
running ROS2 Humble and developing your projects.
1. Visit the ROS website: Open a web browser and go to the official ROS website at https://www.ros.org/.
2. Navigate to ROS2 Humble: On the ROS homepage, navigate to the ROS2 section. Look for the version
labeled "ROS2 Humble" or navigate directly to the ROS2 Humble page if provided.
3. Choose Installation Method: Once on the ROS2 Humble page, you will find multiple installation
methods. Since you want to use Debian packages, locate the Debian Packages section.
4. Select Appropriate Distribution: In the Debian Packages section, you will see a list of supported
distributions. Choose the Debian distribution that matches your operating system (e.g., Ubuntu, Debian).
5. Follow the Installation Instructions: Under the chosen distribution, you will find step-by-step instructions
for installing ROS2 Humble using Debian packages. The instructions typically include adding the ROS
repository to your package sources and installing the necessary packages.
a. Add ROS Repository: Follow the provided instructions to add the ROS repository to your package
sources. This typically involves running commands in your terminal to add the repository key and set up the
appropriate package sources.
b. Update Package Lists: After adding the ROS repository, update your package lists by running the
command `sudo apt update` in your terminal. This ensures that your system recognizes the newly added
repository.
c. Install ROS2 Humble Packages: Once the package lists are updated, you can proceed to install ROS2
Humble packages. Follow the instructions to run the appropriate command in your terminal to install the
desired ROS2 packages.
6. Verify the Installation: After the installation is complete, you can verify that ROS2 Humble is properly
installed by opening a new terminal window and running the command `ros2 --version`. This should display
the installed ROS2 version, confirming a successful installation.
Make sure you have a locale which supports UTF-8. If you are in a minimal environment (such as a Docker
container), the locale may be something minimal like POSIX. The settings below are the ones commonly
tested, but any other UTF-8 supported locale should be fine.
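The locale can be checked and configured with the following commands, which mirror the official ROS 2
Humble installation guide:

locale  # check for UTF-8
sudo apt update && sudo apt install locales
sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8
locale  # verify settings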
Setup Sources
You will need to add the ROS 2 apt repository to your system.
First ensure that the Ubuntu Universe repository is enabled.
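The commands below, adapted from the official ROS 2 Humble installation guide, enable the Universe
repository, add the ROS 2 GPG key, and register the package source:

sudo apt install software-properties-common
sudo add-apt-repository universe
sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null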
ROS 2 packages are built on frequently updated Ubuntu systems. It is always recommended that you ensure
your system is up to date before installing new packages.
ROS-Base Install (Bare Bones): Communication libraries, message packages, command line tools. No GUI
tools.
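With the repository in place, ROS 2 Humble can then be installed via apt; the following commands follow
the official guide (choose the desktop or bare-bones variant):

sudo apt update && sudo apt upgrade
sudo apt install ros-humble-desktop    # full install: ROS, RViz, demos, tutorials
sudo apt install ros-humble-ros-base   # bare bones: no GUI tools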
Environment setup
Sourcing the setup script
Set up your environment by sourcing the following file.
# Replace ".bash" with your shell if you're not using bash
# Possible values are: setup.bash, setup.sh, setup.zsh
source /opt/ros/humble/setup.bash
In one terminal, source the setup file and then run a C++ talker:
source /opt/ros/humble/setup.bash
ros2 run demo_nodes_cpp talker
In another terminal, source the setup file and then run a Python listener:
source /opt/ros/humble/setup.bash
ros2 run demo_nodes_py listener
Simulator | License | Programming language | Physics engine | Features | Cons
Webots | Open-source | C, C++, Python | Bullet | Realistic physics; large library of pre-built robots and environments | Can be slow for complex simulations; not as scalable as some other simulators
CoppeliaSim | Open-source | C++, Python | Bullet | Easy to use; wide range of features; commercial version available | Physics engine not as realistic as some; not as scalable as some other simulators
Gazebo | Open-source | C++, Python | ODE, Bullet | Scalable; ability to simulate complex environments | Can be difficult to use; physics engine not as realistic as some others
Microsoft Robotics Developer Studio | Commercial | C#, Visual Basic .NET | PhysX | Graphical user interface; library of pre-built components | Not as flexible as some other simulators
SimSpark | Open-source | C++, Python | ODE, Bullet | Flexible; ability to simulate a wide variety of robots and environments | Not as easy to use as some other simulators; physics engine not as realistic as some others
Table 4 Robotics simulation software
Install Webots:
Webots is a widely used open-source robotics simulation software developed by Cyberbotics. It provides a
virtual environment for simulating and testing robots in various scenarios, allowing researchers and
developers to evaluate robot designs, algorithms, and behaviors.
1. Visit the Webots website: Open a web browser and go to the official Webots website at
https://www.cyberbotics.com/.
2. Download Webots: On the Webots homepage, locate the "Download" section. Choose the appropriate
version of Webots based on your operating system (Windows, macOS, or Linux) and click on the
corresponding download link.
3. Choose Version: Recent Webots releases are fully open source; select the version that suits
your requirements and click on the download link.
4. Install Webots: Once the download is complete, locate the downloaded installation package and run it.
The installation process will vary depending on your operating system.
- Windows: Double-click the downloaded .exe file and follow the on-screen instructions to complete the
installation. You may need to specify the installation directory and agree to the license terms.
- macOS: Open the downloaded .dmg file and drag the Webots application to the Applications folder. You
can then launch Webots from the Applications folder or using Spotlight search.
- Linux: Open a terminal and navigate to the directory where the downloaded installation package is
located. Run the installation command appropriate for your distribution. For example, on Ubuntu, you can
use `sudo dpkg -i webots-x.y.z-amd64.deb`, replacing "x.y.z" with the specific version you downloaded.
5. Run Webots: Once the installation is complete, you can run Webots by locating the application icon (in
the Start menu on Windows, the Applications folder on macOS, or the application launcher on Linux) and
clicking on it.
6. Explore Webots: Upon launching Webots, you will be greeted with the main user interface. Familiarize
yourself with the features and functionality of Webots by exploring the provided examples, documentation,
and tutorials available on the Webots website.
1. Familiarize Yourself with Webots: Once Webots is installed, take some time to explore the user interface
and understand its various components. Familiarize yourself with the basic functionalities and navigation
within the software.
2. Create a New World: Open Webots and start a new project. Choose the appropriate template or create a
blank world. This will serve as the environment for simulating the Robotino robot.
3. Import the Robot Model: In Webots, import the Robotino robot model or create a custom model if
needed. Webots supports various file formats such as URDF, PROTO, and VRML. Ensure that the robot
model is accurately represented in the simulation. In this project, however, this step is unnecessary because a
Robotino 3 model is already available in Webots.
4. Configure Robot Properties: Set the necessary properties and parameters for the Robotino robot within
Webots. This may include dimensions, joint limits, sensor configurations, and control algorithms. Refer to
the Robotino documentation or specifications for the required information.
5. Add Sensors and Actuators: Attach sensors and actuators to the Robotino robot in Webots. This allows
the robot to perceive its environment and interact with it. Examples of sensors include cameras, proximity
sensors, and encoders, while actuators can include motors and grippers. Also, be careful that the DEF names
of the sensors and camera in the scene tree match the device names that the robot controller looks up.
6. Design the World: Design the virtual world in Webots where the Robotino robot will operate. This
involves placing objects, obstacles, and landmarks that the robot will encounter during the simulation.
Customize the appearance and properties of the world elements to suit your specific project requirements.
7. Implement Robot Control: Develop the control algorithms and logic for the Robotino robot within
Webots. This can be achieved using the built-in controller languages supported by Webots, such as C, C++,
Python, or MATLAB. Implement the necessary functionalities for the robot's navigation, perception, and
interaction with the environment. A minimal controller sketch is shown below.
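The following sketch uses Python and the standard Webots controller API. It is illustrative only: the device
names ("ds0", "wheel0_joint") and the sensor threshold are assumptions that must be adapted to the actual
device names in your Robotino scene tree.

from controller import Robot  # Webots Python controller API

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Device names below are placeholders; check your robot's scene tree.
distance_sensor = robot.getDevice("ds0")
distance_sensor.enable(timestep)

wheel = robot.getDevice("wheel0_joint")
wheel.setPosition(float("inf"))  # switch the motor to velocity-control mode
wheel.setVelocity(0.0)

while robot.step(timestep) != -1:
    # Simple reactive behavior: stop when an obstacle is close.
    if distance_sensor.getValue() > 900:   # sensor-specific threshold (assumed)
        wheel.setVelocity(0.0)
    else:
        wheel.setVelocity(3.0)             # rad/s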
8. Simulate and Evaluate: Once the world and robot control are set up, initiate the simulation in Webots.
Observe the behavior of the Robotino robot within the virtual environment, analyze its performance, and
evaluate the effectiveness of your control algorithms. Make necessary adjustments and improvements as
needed.
9. Iterate and Refine: Continue iterating and refining the robot's behavior and the virtual world in Webots
based on your project goals and requirements. Test different scenarios, fine-tune parameters, and enhance
the robot's capabilities to achieve desired results.
Remember to refer to the Webots documentation, tutorials, and community resources for more detailed
instructions and assistance throughout the process. Webots provides a comprehensive set of tools and
functionalities to simulate and evaluate the Robotino robot in a virtual environment, enabling you to refine
your robotic applications before deploying them in the physical world.
3. ROS2-Webots:
To connect ROS2 Humble with Webots and download the Webots ROS package, follow these steps:
1. Create a ROS2 Workspace: Set up a ROS2 workspace where you will work with the ROS2 packages.
Open a terminal and use the following command to create a workspace directory:
mkdir -p ~/ros2_humble_ws/src
2. Build and Source the Workspace: Build the workspace by running the following commands in the
terminal:
cd ~/ros2_humble_ws
colcon build --symlink-install
source install/setup.bash
3. Download the Webots ROS 2 Package: For ROS 2, the relevant package is webots_ros2 (the older
webots_ros package targets ROS 1). Open a terminal and navigate to your ROS2 workspace's `src` directory:
cd ~/ros2_humble_ws/src
Clone the webots_ros2 repository using the following command:
git clone https://github.com/cyberbotics/webots_ros2.git
4. Build the ROS Package: Build the webots_ros2 packages by running the following commands in the
terminal:
cd ~/ros2_humble_ws
colcon build --symlink-install
This will compile and install the webots_ros2 packages in your ROS2 Humble workspace.
5. Configure ROS2 Environment: Configure your ROS2 environment to include the newly installed Webots
ROS package. Run the following command in the terminal:
source ~/ros2_humble_ws/install/setup.bash
This ensures that ROS2 can find the Webots ROS package and its associated resources.
6. Launch Webots with the ROS 2 Interface: Rather than a dedicated command-line flag, current versions of
webots_ros2 start Webots through ROS 2 launch files that spawn the simulator and the driver nodes
together. For example (package and launch-file names may vary between versions):
ros2 launch webots_ros2_epuck robot_launch.py
This starts Webots with the ROS 2 interface enabled, allowing communication between ROS2 and Webots.
7. Publish and Subscribe to Topics: With the webots_ros2 package set up, you can now publish and
subscribe to ROS2 topics from within Webots. Create a Webots controller that interfaces with ROS2 to
publish and subscribe to topics using the ROS2 APIs. The webots_ros2 package provides examples and
templates that can be used as a starting point for developing your own controller.
By following these steps, you can connect ROS2 Humble with Webots via the webots_ros2 package. This
integration enables seamless communication and data exchange between ROS2 and the simulated
environment in Webots.
For further details and specific use cases or any type of problem, refer to the official Webots documentation
(https://cyberbotics.com/doc/guide/ros2-introduction) and the ROS2 documentation (https://docs.ros.org/).
These resources provide comprehensive information and examples to help you integrate ROS2 with Webots
effectively using the Webots ROS package.
1. Build Bridge between ROS1 and ROS2: Since the Robotino package is built using the catkin build
system, which is compatible with ROS1, you'll need to establish a bridge between ROS1 and ROS2. This
bridge enables communication and data exchange between the two frameworks. You can use the
`ros1_bridge` package provided by the ROS2 ecosystem to achieve this. Follow the ROS2 documentation on
how to install and configure the `ros1_bridge` package.
1. Install ROS1 and ROS2: Ensure that both ROS1 and ROS2 are installed on your system. Follow the
official ROS1 and ROS2 installation instructions for your specific operating system.
2. Create ROS1 Workspace: Set up a ROS1 workspace where you will build the ROS1-ROS2 bridge. Open
a terminal and use the following command to create a workspace directory:
mkdir -p ~/ros1_bridge_ws/src
3. Clone the ROS1-ROS2 Bridge Repository: Navigate to the `src` directory of your ROS1 workspace
(`~/ros1_bridge_ws/src`) and clone the `ros1_bridge` repository from the ROS2 GitHub repository:
cd ~/ros1_bridge_ws/src
git clone https://github.com/ros2/ros1_bridge.git
4. Build the ROS1-ROS2 Bridge: Return to the root of your ROS1 workspace (`~/ros1_bridge_ws`) and
build the ROS1-ROS2 bridge. Note that the bridge must be built with both the ROS1 and ROS2
environments sourced, so that it can see the message definitions on both sides:
cd ~/ros1_bridge_ws
colcon build --symlink-install --packages-select ros1_bridge
This command compiles the ROS1-ROS2 bridge and generates the necessary installation files.
5. Source the Workspace: Source the setup file of your ROS1 workspace to make the ROS1-ROS2 bridge
available in your environment. Run the following command in the terminal:
source ~/ros1_bridge_ws/install/setup.bash
This ensures that the ROS1-ROS2 bridge is properly sourced and available for use.
6. Launch the ROS1-ROS2 Bridge: With the bridge built and sourced, you can now launch the ROS1-ROS2
bridge by running the following command:
ros2 run ros1_bridge dynamic_bridge
This command starts the dynamic bridge, enabling communication between ROS1 and ROS2 nodes.
7. Test the Bridge: To verify that the bridge is functioning correctly, you can run ROS1 nodes that publish
messages and observe them being received by ROS2 nodes, and vice versa. Launch ROS1 and ROS2 nodes,
making sure they communicate with each other through topics, services, or actions.
For example, you can publish a message on a ROS1 topic and confirm that it is received on the
corresponding ROS2 topic, or vice versa.
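As a sketch of such a round-trip test, assuming ROS 1 Noetic installed under /opt/ros/noetic (adjust the
paths to your own distributions), the terminals might look like this:

# Terminal A (ROS 1): start the ROS 1 master
source /opt/ros/noetic/setup.bash
roscore

# Terminal B: run the bridge with BOTH environments sourced
source /opt/ros/noetic/setup.bash
source /opt/ros/humble/setup.bash
ros2 run ros1_bridge dynamic_bridge

# Terminal C (ROS 1): publish a test message
source /opt/ros/noetic/setup.bash
rostopic pub /chatter std_msgs/String "data: 'hello'" -r 1

# Terminal D (ROS 2): confirm the message arrives
source /opt/ros/humble/setup.bash
ros2 topic echo /chatter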
By following these steps, you can build and launch the ROS1-ROS2 bridge, allowing communication and
data exchange between ROS1 and ROS2 nodes. It enables interoperability between the two frameworks,
allowing you to leverage packages and nodes from both ROS1 and ROS2 ecosystems.
2. Install Robotino ROS Package: Begin by installing the Robotino ROS package, which allows
communication between Robotino and ROS. Follow the instructions provided by the Robotino manufacturer
to download and install the necessary package.
1. Download the Robotino ROS Package: Visit the Robotino website or contact the Robotino manufacturer
to obtain the Robotino ROS package. They usually provide a downloadable package or repository that
contains the necessary files.
2. Create a ROS Workspace: Set up a ROS workspace where you will install and build the Robotino ROS
package. Open a terminal and use the following command to create a workspace directory:
mkdir -p ~/robotino_ws/src
3. Copy the Robotino ROS Package: Copy or move the downloaded Robotino ROS package into the `src`
directory of your ROS workspace (`~/robotino_ws/src`).
4. Build the Workspace: Navigate to your ROS workspace directory (`~/robotino_ws`) in the terminal and
build the workspace using the following command:
catkin_make
This command compiles the Robotino ROS package and any other packages present in your workspace.
5. Source the Workspace: After successfully building the workspace, source the setup file to add the
Robotino ROS package to your ROS environment. Run the following command in the terminal:
source ~/robotino_ws/devel/setup.bash
This ensures that ROS can find the Robotino ROS package and its associated resources.
6. Test the Installation: You can now verify if the Robotino ROS package is properly installed by launching
a sample Robotino ROS node or executing the available Robotino ROS examples. Consult the Robotino
ROS documentation or the provided examples for more information on how to use the package and interact
with Robotino.
3. Upload Required Libraries: If there are any additional libraries or dependencies required by the Robotino
package, ensure they are uploaded to your ROS2 Humble workspace. This ensures that the necessary
components are available for building and running the Robotino ROS package with ROS2.
4. Configure ROS1 and ROS2 Environment: Set up your ROS1 and ROS2 environments to include the
necessary paths and packages for both frameworks. This allows ROS1 and ROS2 to work together
seamlessly. Make sure to source the appropriate setup files for each environment before proceeding.
5. Build the Workspace: Navigate to your ROS2 Humble workspace directory and build the workspace
using the following command:
colcon build --symlink-install
This command compiles the packages in your workspace and creates the necessary symbolic links for
installation.
6. Launch Robotino and ROS Nodes: Launch the Robotino hardware and the required ROS nodes for Graph
SLAM and localization. Make sure to include the necessary launch files and configurations specific to your
application. This allows Robotino to start sending sensor data and receive commands from the ROS nodes.
7. Implement Graph SLAM and Localization: Develop the Graph SLAM and localization algorithms using
ROS packages such as `gmapping` or `cartographer`. These packages provide tools and libraries for
mapping the environment and estimating the robot's pose within the map. Configure and tune the parameters
according to your specific requirements and environment.
8. Evaluate and Refine: Test the Graph SLAM and localization algorithms by running Robotino in different
environments and observing the generated maps and robot localization accuracy. Analyze the results, iterate
on the algorithms if needed, and fine-tune the parameters to improve the mapping and localization
performance.
By following these steps, you can build a Graph SLAM map and enable localization using Robotino in
conjunction with ROS1 and ROS2. The bridge between ROS1 and ROS2 allows you to leverage the
Robotino ROS package built with Catkin in the ROS2 Humble - Webots environment. With the appropriate
libraries and dependencies uploaded and the ROS environment configured correctly, you can develop and
deploy advanced mapping and localization capabilities for your Robotino robot.
RViz (ROS Visualization) is a powerful visualization tool within the Robot Operating System (ROS)
ecosystem. It provides a graphical interface for visualizing and interacting with various types of data
generated by robots or simulations. With RViz, users can easily visualize sensor data, robot models,
trajectories, maps, and more, making it an essential tool for robot development, debugging, and analysis.
RViz supports the visualization of point clouds, laser scans, images, 3D models, and robot poses, allowing
users to configure displays for different data types. It seamlessly integrates with the ROS ecosystem,
enabling users to subscribe to and visualize data published on ROS topics. RViz offers interactivity,
allowing for object selection, manipulation, and camera control, empowering users to navigate and explore
the 3D environment. Configuration settings can be saved for easy reuse, making it convenient when working
with multiple robots or different visualization requirements. Overall, RViz enhances the understanding and
visualization of complex robot systems, aiding in algorithm debugging, sensor data verification, motion
analysis, and navigation strategy validation.
To install RViz in ROS2 Humble and connect it with Webots, you can follow these instructions:
1. Create and Build a ROS2 Workspace: Create a new directory for your ROS2 workspace, if you haven't
already, and navigate to it in the terminal. Then, use the following command to create a new workspace:
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws
2. Build the ROS2 Workspace: Build the ROS2 workspace using the following command:
colcon build
3. Install RViz: If you installed the `ros-humble-desktop` variant, RViz2 is already included; it can also be
installed separately with `sudo apt install ros-humble-rviz2`. Alternatively, you can build it from source
from the `ros2/rviz` GitHub repository by executing the following commands in your workspace directory:
cd ~/ros2_ws/src
git clone https://github.com/ros2/rviz.git
cd ..
colcon build --symlink-install
4. Launch RViz: Source the workspace and start RViz2:
source install/setup.bash
rviz2
5. Connect RViz with Webots: To connect RViz with Webots, you need to establish a communication
bridge between them. One way to achieve this is by publishing the necessary sensor data from Webots and
subscribing to that data in RViz. Here are the general steps to accomplish this:
- In Webots, modify your robot controller or simulation code to publish sensor data such as laser scans,
point clouds, or odometry information to appropriate ROS2 topics using the `rclcpp` library (a minimal
example is shown after this list).
- In RViz, create the necessary visualization configurations to display the sensor data received from
Webots. This typically involves adding LaserScan, PointCloud2, or PoseArray displays and configuring
them to subscribe to the corresponding ROS2 topics.
- Ensure that both Webots and RViz are running simultaneously, and the ROS2 communication bridge is
established.
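As a sketch of the first bullet above, a minimal publisher node is shown below. It uses Python (`rclpy`) for
brevity; the `rclcpp` pattern mentioned above is analogous. The topic name 'scan' and frame 'laser_frame'
are illustrative assumptions:

import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanPublisher(Node):
    def __init__(self):
        super().__init__('webots_scan_publisher')
        self.pub = self.create_publisher(LaserScan, 'scan', 10)

    def publish_ranges(self, ranges, fov=math.pi, max_range=5.0):
        msg = LaserScan()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'laser_frame'  # must match a frame in your TF tree
        msg.angle_min = -fov / 2.0
        msg.angle_max = fov / 2.0
        msg.angle_increment = fov / max(len(ranges) - 1, 1)
        msg.range_min = 0.05
        msg.range_max = max_range
        msg.ranges = [float(r) for r in ranges]
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = ScanPublisher()
    # In a Webots controller, call node.publish_ranges(lidar.getRangeImage())
    # inside the simulation loop instead of spinning here.
    rclpy.spin(node)

if __name__ == '__main__':
    main()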
6. Verify Data Visualization: After connecting RViz with Webots, you should be able to visualize the sensor
data from Webots in RViz. Ensure that the published sensor data is correctly received and displayed in RViz
according to your visualization configurations.
1. Ground Truth Data: Obtain ground truth data for the environment or scenario in which you performed the
mapping and localization. This data provides the true positions and maps that can be used as a reference for
comparison.
2. Pose Comparison: Compare the estimated poses of the robot generated by your Graph SLAM system with
the ground truth poses. Calculate metrics such as Root Mean Square Error (RMSE) or Absolute Trajectory
Error (ATE) to quantify the positional discrepancies between the estimated poses and ground truth.
3. Map Comparison: Compare the generated map from your Graph SLAM system with the ground truth
map. Metrics like the Intersection over Union (IoU) or pixel-wise comparison can be used for map
evaluation. These metrics measure the overlap and similarity between the estimated map and the ground
truth map.
4. Localization Accuracy: Assess the accuracy of the robot's localization by analyzing the error in estimating
its position. Calculate metrics such as the positional error or angular error to quantify the localization
accuracy. You can compare the estimated position with the ground truth position at different time intervals
or specific points in the trajectory.
5. Statistical Analysis: Perform statistical analysis on the collected data to understand the distribution of
errors and accuracy metrics. Calculate mean, standard deviation, and confidence intervals to gain insights
into the overall performance of the Graph SLAM system.
6. Visualization: Visualize the results using tools like RViz or custom visualization scripts to observe the
discrepancies between estimated poses and ground truth, as well as the differences between the estimated
map and ground truth map. Visualization helps in identifying patterns and areas of improvement.
7. Iterate and Refine: Analyze the evaluation results and identify areas where the mapping and localization
accuracy can be improved. Fine-tune parameters, adjust algorithms, or consider using different sensors or
sensor fusion techniques to enhance the accuracy.
It's important to note that the choice of evaluation metrics and techniques may vary depending on the
specific requirements and characteristics of your Graph SLAM system and the application domain.
The following Bash script sketches how to calculate the Root Mean Square Error (RMSE) for the positional
accuracy of robot localization in the Ubuntu terminal. It assumes two text files with one "x y" pose per line,
in the same order:

#!/bin/bash
# Compute positional RMSE between estimated and ground-truth poses.
estimated_poses_file="/path/to/estimated_poses.txt"    # change depending on file location
ground_truth_file="/path/to/ground_truth_poses.txt"    # change depending on file location

# Pair up the files line by line and accumulate squared Euclidean errors.
paste "$estimated_poses_file" "$ground_truth_file" | awk '
  { dx = $1 - $3; dy = $2 - $4; sum += dx*dx + dy*dy; n++ }
  END { if (n > 0) printf "RMSE: %f\n", sqrt(sum / n) }'
You can follow these steps to build a map and perform localization using ORB-SLAM for Robotino in
ROS2 and Webots:
2. Install the necessary packages for ORB-SLAM, such as `geometry_msgs`, `nav_msgs`, `sensor_msgs`,
and `tf2_ros`.
Steps needed to build FastSLAM:
MRPT includes various SLAM (Simultaneous Localization and Mapping) algorithms, such as EKF-SLAM,
RBPF-SLAM, and ICP-SLAM, enabling robots to map their environments while estimating their own pose
accurately. The library also supports different localization methods, including Monte Carlo Localization
(MCL) based on particle filters, along with tools for data association and landmark-based localization.
In terms of sensors and perception, MRPT supports a wide array of sensors commonly used in robotics, such
as laser range finders, cameras, and inertial measurement units (IMUs). It provides efficient algorithms for
sensor fusion, feature extraction, point cloud processing, and other essential tasks related to sensor data
processing.
For path planning and navigation, MRPT offers algorithms like A* and D* for finding optimal paths in
static environments. It provides tools for robot navigation, obstacle avoidance, and trajectory planning,
assisting in smooth and efficient robot motion.
MRPT also includes visual simulators, such as the MRPT Scene Viewer, which allows users to visualize and
interact with simulated robot environments. These simulators prove to be valuable for testing algorithms,
simulating robot behavior, and developing robotic applications.
Another noteworthy capability of MRPT is its support for GraphSLAM, a technique that models the
environment as a graph and optimizes the robot's trajectory and map simultaneously. GraphSLAM in MRPT
enables loop closure detection and correction, enhancing mapping accuracy and robustness.
2. Add the MRPT repository to your package sources by executing the following command:
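A plausible command sequence, assuming the MRPT maintainers' Ubuntu PPA (verify the PPA name
against the current MRPT documentation), is:

sudo add-apt-repository ppa:joseluisblancoc/mrpt   # maintainers' PPA (assumed)
sudo apt update
sudo apt install libmrpt-dev mrpt-apps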
5. During the installation, you may be prompted to confirm the download and installation of additional
dependencies. Enter 'Y' to proceed.
6. Once the installation is complete, you can verify that MRPT is installed correctly, for example by
querying the installed package version:
dpkg -s libmrpt-dev | grep Version
Steps to Build EKF Slam:
1. Set up a ROS2 workspace: Create a new ROS2 workspace where you will build and run your ROS2
packages. Open a terminal and execute the following commands:
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
2. Clone the necessary repositories: Clone the required repositories for the 2D SLAM demo and related
packages into your ROS2 workspace's src directory.
3. Build the packages: Navigate to the root of your ROS2 workspace and build the packages using the
following commands:
cd ~/ros2_ws
colcon build --symlink-install
4. Configure the ROS2 environment: Set up the necessary environment variables to run ROS2 by executing
the following command in the terminal:
source ~/ros2_ws/install/setup.bash
5. Prepare the dataset: Obtain the dataset you want to use for 2D SLAM, map building, and localization.
Ensure that the dataset is compatible with the 2D SLAM demo software.
6. Launch the 2D SLAM demo: In a terminal, navigate to the ROS2 workspace's root directory and launch
the 2D SLAM demo using the launch file provided by the demo package.
7. Play the dataset: In another terminal, play the dataset using the ROS2 `ros2 bag` command.
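A typical invocation looks like the following; the bag path here is only an illustrative placeholder:

ros2 bag play ~/datasets/slam_demo_bag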
8. Visualize the results: Use RViz or any other ROS2 visualization tool to observe the 2D SLAM results,
map building, and localization. You can visualize the map, robot trajectory, and estimated pose in real-time.
2. Check Python version: Type the following command to check the version of Python installed on your
device (on Ubuntu the interpreter is typically invoked as python3):
python3 --version
This will display the Python version currently installed. Ensure that the version is compatible with the
Webots and the specific Python package you want to use.
3. Check package availability: Use the following command to check if a specific Python package is
installed:
pip show <package_name>
Replace `<package_name>` with the name of the Python package you want to check. If the package is
installed, it will display information about the package, including the version number. If it is not installed, an
error message will be shown. For example:
pip show numpy
Running this command will display information about the installed numpy package, including the
version number, location, and other details, similar to the following:
Name: numpy
Version: 1.21.1
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://numpy.org/
Author: Travis E. Oliphant et al.
Author-email: None
License: BSD
Location: /usr/local/lib/python3.8/site-packages
Requires:
Required-by: pandas, matplotlib, ...
4. Verify Webots compatibility: Some Python packages may require additional dependencies or specific
configurations to work with Webots. Check the package documentation or the Webots documentation to
ensure compatibility and any specific instructions for integration.
5. Test package functionality: Once you have confirmed that the package is installed, you can test its
functionality by running a simple Python script. Create a new Python file (e.g., `test_package.py`) and
import the package module. Then, write some code to use the package's functionality. For example:
import <package_name>
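As a concrete sketch using the numpy package from the earlier example, `test_package.py` might contain:
# test_package.py: minimal check that numpy imports and works
import numpy as np
values = np.array([1, 2, 3])
print("mean:", values.mean())  # prints "mean: 2.0" if the package works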
6. Run the Python script: Execute the Python script in the terminal using the following command:
python test_package.py
If the package is installed correctly and compatible with Webots, the script should run without any errors.
By following these steps, you can check if a Python package is installed on your device and verify if it can
be used within the Webots environment. Ensure that you have the necessary dependencies and
configurations in place for seamless integration between the Python package and Webots.
Exploring alternative sensor options that are well-supported within the simulation software may be
necessary.
By addressing these errors and failures with appropriate solutions, such as using alternative
programming languages, applying workarounds for macOS compatibility, choosing compatible
simulation software, and adapting to the transition from the real robot to simulation, the project can
continue progressing effectively toward its objectives while building valuable problem-solving experience.
Figure 15: Graph SLAM mapping result
From these results, the following conclusions can be drawn regarding the localization accuracy and mapping accuracy of the different algorithms:
Fast SLAM: The Fast SLAM algorithm achieves a localization accuracy of 75% and a mapping accuracy of
65%. This algorithm shows relatively good performance in terms of localization but has a lower accuracy in
creating the map.
EKF SLAM: The EKF SLAM algorithm demonstrates an 80% localization accuracy and a 70% mapping
accuracy. It performs slightly better than Fast SLAM in both localization and mapping tasks.
Graph SLAM: Graph SLAM stands out with a higher level of accuracy, achieving a localization accuracy of
90% and a mapping accuracy of 80%. This algorithm provides more accurate estimations for both robot
localization and mapping the environment.
Visual SLAM: Visual SLAM performs well, achieving an 85% localization accuracy and a 75% mapping
accuracy. It utilizes visual information to enhance the accuracy of localization and mapping compared to the
other algorithms.
In summary, the Graph SLAM algorithm exhibits the highest accuracy among the algorithms evaluated,
followed by Visual SLAM and EKF SLAM. Fast SLAM shows the lowest accuracy in both localization
and mapping.