
Capstone Report

Robotino and SLAM Algorithms


Supervised by:
Dr. Tarek Tutunji

Esraa Talat 18110087


Table of Contents

INTRODUCTION

LITERATURE REVIEW

THEORETICAL BACKGROUND

SYSTEM DESIGN

IMPLEMENTATION

EXPERIMENTAL RESULTS AND FINDINGS: CONCLUSIVE OUTCOMES

CONCLUSION AND FUTURE WORK

Tables:
Table 1 SLAM Algorithm Kinds
Table 2 Visual SLAM kinds
Table 3 Comparison of some ROS 2 distributions
Table 4 Robotics simulation software

Figures:
Figure 1 Mobile robot Festo Robotino
Figure 2 Top and front view for Robotino
Figure 3 Decomposition of wheel-pitch circumferential velocities
Figure 4 SLAM Processing Flow [25]
Figure 5 System Block Diagram
Figure 6 Flow chart
Figure 7 Downloaded VirtualBox package
Figure 8 Upload Ubuntu
Figure 9 Create VirtualBox: start new VirtualBox
Figure 10 Create VirtualBox: add type
Figure 11 Add Ubuntu in VirtualBox
Figure 12 Ubuntu within VirtualBox
Figure 13 Visual ORB SLAM mapping result
Figure 14 FastSLAM mapping result
Figure 15 Graph SLAM mapping result
Figure 16 EKF SLAM result

Equations:
Equation 1 Bayesian filter
Equation 2 The probabilities of all possible values of a random variable must sum to 1
Equation 3 Predicted x-position
Equation 4 Predicted y-position
Equation 5 Predicted heading
Equation 6 Predict the state at time t
Equation 7 Linearize the system dynamics around the predicted state
Equation 8 Predict the covariance of the state
Equation 9 Update the state estimate based on the measurement
Equation 10 Compute the Kalman gain
Equation 11 Update the state estimate
Equation 12 Update the covariance of the state
Equation 13 Mean landmark error
Equation 14 Distance error
Equation 15 Angle error

Introduction

Problem Definition & Importance

The problem addressed in this capstone project is to compare the accuracy of mapping and localization
results achieved by four different Simultaneous Localization and Mapping (SLAM) methods implemented
on a Robotino autonomous robot. The goal is to evaluate the performance of these SLAM techniques and
identify which method produces the most accurate maps and localization estimates for the Robotino.

The accurate mapping of the environment and precise localization of the Robotino are crucial for its
effective operation as an autonomous robot. By conducting a comparative analysis of the four SLAM
methods, this project aims to determine which technique provides the highest level of accuracy in generating
maps and localizing the robot within the environment.

The importance of this project lies in its potential to improve the overall performance and reliability of the
Robotino's navigation and localization capabilities. By identifying the most accurate SLAM method, the
project outcomes can contribute to enhancing the efficiency and effectiveness of the Robotino in performing
its tasks, such as autonomous package delivery.

Furthermore, the project's findings and insights can have broader implications for the field of autonomous
robotics. The evaluation and comparison of different SLAM techniques can provide valuable knowledge and
guidance for researchers and practitioners working with autonomous robots in diverse applications. This
project can contribute to advancing the understanding of SLAM methods and their suitability for different
robotic systems beyond the Robotino, fostering innovation and improvement in the field.

Project Description

The objective of this project is to compare the accuracy of mapping and localization results achieved by four
different Simultaneous Localization and Mapping (SLAM) methods implemented on a Robotino
autonomous robot. SLAM is a vital technique in robotics that enables a robot to simultaneously create a map
of its environment and determine its own position within that map.
The project will involve implementing and integrating four distinct SLAM algorithms onto the Robotino
using multiple platforms and software tools. Each algorithm will be responsible for mapping the environment and
estimating the robot's position based on sensor data, such as laser scans and odometry readings. The four
SLAM methods chosen for evaluation will be selected based on their popularity and performance in the field
of robotics.
To conduct the comparison, a series of experiments will be designed and executed in various environments.
The Robotino will navigate through these environments, and the four SLAM methods will generate maps
and estimate the robot's position. The accuracy of the maps and localization results will be evaluated by
comparing them against ground truth data obtained from external sources, such as manually created maps or
motion capture systems.
Data collected during the experiments will be analyzed to assess the accuracy of each SLAM method in
terms of map quality and localization precision. Various metrics, such as map consistency, feature
alignment, and localization error, will be used to quantify and compare the performance of the different
algorithms. The results will be statistically analyzed to determine significant differences in accuracy
between the SLAM methods.
The project aims to provide insights into which SLAM algorithm performs best in terms of accuracy and
reliability on the Robotino platform. The findings will help identify the most suitable SLAM method for
mapping and localization tasks in similar autonomous robot applications. Additionally, the project will
contribute to the broader field of robotics by adding to the knowledge base of SLAM algorithms and their
performance characteristics, aiding researchers and practitioners in making informed decisions regarding
SLAM implementation.

Design Specifications
The specifications for this project are comprehensive and cover a wide range of aspects that are critical for
the development and successful deployment of an autonomous delivery robot. These specifications are
designed to ensure that the final product meets all the requirements and performance criteria and can
perform its mission effectively and efficiently. The specifications cover the following items:

Hardware:
The project will utilize the Robotino platform within the Webots simulation software. The Robotino robot
will be equipped with a variety of sensors, such as distance sensors, position sensors, a gyro sensor, cameras,
and an IMU, to perceive its environment and enable navigation. The robot will also have the capability to
establish a network connection within the simulation for communication and localization information.

Software:
The project will implement SLAM algorithms within the Webots environment to enable the robot to create a
map of the simulated environment and accurately determine its position. Additionally, obstacle detection
and avoidance algorithms will be integrated into the robot's control system to ensure safe navigation.
The software design specifications for this project involve implementing four SLAM algorithms within the
Webots environment: ORB SLAM, FastSLAM, graph-based SLAM using ROS 2 and Webots, and EKF
SLAM using the 2D SLAM demo from MRPT. These algorithms enable the robot to create accurate maps
and determine its position. Path planning algorithms generate collision-free paths, while obstacle detection
and avoidance algorithms ensure safe navigation. Visualization tools like RViz, Webots, and ROS provide
real-time feedback. The objective is to enhance mapping, localization, path planning, and obstacle
avoidance capabilities for the robot's navigation within Webots.

Networking:
The simulation will allow the robot to establish a network connection for communication and localization
purposes. The robot will receive real-time updates on its mission and location through this network
connection. However, the project will also focus on ensuring the robot's ability to operate robustly and
autonomously in the event of a network disconnection or unstable connection. The SLAM algorithm will
continue to function using previously acquired map data, allowing the robot to maintain localization and
navigate to the target location even without a stable network connection.

Testing and Evaluation:


The robot's performance will be evaluated extensively within the Webots simulation environment. The
testing will include various scenarios to assess localization accuracy, navigation efficiency, obstacle
detection, and avoidance rate. Additionally, the project will evaluate the robot's energy efficiency, safety
mechanisms, and user interface within the simulated environment. The testing and evaluation results will
guide improvements and ensure the project meets the specified performance criteria.

Timing and Deliverables:


The project will be executed within a specified timeframe, adhering to regular progress reports and
deliverables. The deliverables will include source code, documentation, and detailed test results. The project
timeline will consider the complexity of the project, available resources, and expertise required for
completion.

Safety:
The robot's behavior will be designed to prioritize safety. It will include collision detection and avoidance
mechanisms to mitigate risks to people, property, and the environment. These safety mechanisms will be
thoroughly tested and integrated into the robot's control system to ensure safe operation within the
simulated environment.

Human-Robot Interaction:
The project will incorporate a user interface within the Webots simulation software to enable remote control
and monitoring of the robot's status and mission progress. The user interface will be intuitive and
user-friendly, providing clear instructions for operation and facilitating interaction with the robot within the
simulated environment.

System Assumptions

This project is based on important system assumptions. The first assumption is that the robot will be
operating in a known and static environment, allowing it to use its sensors and localization capabilities
effectively. The functionality of the robot's sensors and actuators is also assumed to be as expected,
allowing it to gather information about the environment and localize itself accordingly.

Additionally, the algorithm for obstacle detection and avoidance is assumed to be accurate and reliable.
There are no assumptions of malicious interference, and the robot is expected to follow planned paths, detect
the environment, and build a map for it. The testing environment is assumed to accurately represent the
expected operating conditions for reliable performance evaluations.
Performance Criteria

This project aims to evaluate the performance of the robot in the Webots simulation environment based on
three key criteria. The first criterion is Map Accuracy, which assesses the accuracy of the generated map by
comparing it with ground truth data. The second criterion is Localization Accuracy, which measures how
closely the robot's estimated position aligns with the actual ground truth position. The third criterion is
Obstacle Detection and Avoidance, which evaluates the effectiveness of the robot's algorithms in detecting
and avoiding obstacles to ensure collision-free navigation. By considering these criteria, a thorough
assessment of the robot's mapping accuracy, localization accuracy, and obstacle detection capabilities in the
simulation environment can be conducted.

Literature Review
On one hand, the field of mobile robotics has seen exponential growth in recent years, with
many companies venturing into the development and deployment of autonomous robots for a wide
range of applications. One such company, Kiva Systems (now known as Amazon Robotics), utilizes a
fleet of autonomous mobile robots for warehouse fulfillment. These robots navigate through the
warehouse, picking and transporting items to be packaged for shipment. Another company, Nuro,
uses autonomous delivery vehicles for the transportation of groceries and other goods, offering a
convenient and efficient solution for customers who can receive their items without having to leave
their homes. Starship Technologies uses small, sidewalk-navigating robots for local delivery in
densely populated urban areas, offering a low-cost and environmentally friendly solution. Udelv, on
the other hand, utilizes autonomous delivery vans for grocery and other goods delivery, providing
a larger capacity for deliveries. Savioke has created autonomous robots to deliver items within
hotels, improving the guest experience by providing quick and efficient service. NayaTech has
developed drones for last-mile delivery in urban areas, offering a fast and efficient solution for
delivering items directly to customers. Eliport uses autonomous drones for medical sample delivery
in hospitals, providing a safe and reliable way to transport sensitive materials. Boxbot has
implemented autonomous delivery trucks for package delivery, offering a cost-effective solution for
large-scale deliveries. Postmates uses a fleet of autonomous delivery robots for food and goods
delivery, offering a convenient solution for customers. RoboPostman utilizes autonomous robots
for mail and package delivery in residential areas, offering a quick and efficient solution for postal
services. Each of these companies offers unique advantages and disadvantages in terms of cost,
efficiency, and reliability. However, they all demonstrate the potential for mobile robotics to
revolutionize various industries and make our lives easier.

On the other hand, SLAM is a widely studied problem in robotics, with a significant impact
on autonomous navigation. Researchers have proposed various techniques for SLAM, using a range
of sensor types. One example is using laser sensors, as proposed by (Eliazar and Parr, 2003), which
can produce detailed maps but may be affected by shiny or black objects. Another approach is using
sonar sensors, as proposed by (Zunino and Christensen, 2001), which are low-cost and have low
computational complexity but lack fine-grained information. Other examples include bio-sonar
(Steckel and Peremans, 2013), which has high intelligent interaction capability but struggles in
complex environments, and vision-based SLAM (Irie et al., 2012), which can acquire more
information but is sensitive to shadows and illumination conditions. Despite the limitations of
individual sensors, some researchers propose using multiple sensors for improved accuracy. SLAM
algorithms are utilized in several robot projects that are currently under development, including
Scriba, Mapbot, Sparki, BOBO, and Autonomous Home Robot. Each project has its unique
components, such as ELP cameras, stepper motors, servomotors, IR or ultrasonic distance sensors,
Matlab, Sparki's onboard servo-mounted ultrasonic distance sensor, Raspberry Pi, Arduino Mega,
Teensy 4.1, LIDAR system, and ROS, to build the robot and run the SLAM algorithm. The
implementation of SLAM varies from project to project, with some using PID control systems for
self-balancing and navigation, while others utilize serial connections to collect environmental data.
Despite the differences, all these projects have one common goal, which is to use the SLAM
algorithm for navigation.

Despite the promising future of mobile robotics, there are still several disadvantages that
need to be addressed. These include the high cost of acquiring and maintaining the robots, the
need for specialized infrastructure, and the limitations of current technology. Currently, the
cost of autonomous delivery robots is relatively high, making it difficult for some companies to
implement them. In addition, the technology required to support autonomous delivery robots is
still developing, and the reliability and robustness of these systems are not yet at the level of human
drivers. The regulations and infrastructure for autonomous delivery robots are also not yet fully
developed, presenting additional challenges for companies looking to implement them.

This is where this project comes in. The goal of this project is to utilize the SLAM algorithm to overcome
the limitations faced by current autonomous robots in the industry. The SLAM algorithm will allow
the robot to map its surroundings in real time and keep track of its location, even in changing
environments. This will greatly enhance the reliability and robustness of the robot, reducing the
chances of failure and improving the efficiency of its operations. The SLAM algorithm will also
allow the robot to dynamically adjust its trajectory based on its surroundings, ensuring that it
stays on track even in changing environments. Furthermore, the use of SLAM will improve the
safety of the robot, as it will be able to detect and avoid potential hazards in its path.

Besides, the Festo Robotino mobile robot has been used in many projects, each showcasing its unique
capabilities and applications. Some examples of Festo Robotino mobile robot projects include the
development of an autonomous inspection robot, a cooperative multi-robot system, and a delivery
robot. Each project utilizes the Festo Robotino's advanced features, such as its robustness,
versatility, and reliability, to achieve its specific goals. Despite the differences in their applications,
all these projects share the common goal of utilizing Festo Robotino's capabilities to solve real-
world problems and make a positive impact on society. Whether it's through improving the
efficiency of industrial processes, reducing human workload in hazardous environments, or
delivering goods more conveniently and efficiently, the Festo Robotino is an innovative and
valuable tool for researchers and engineers. With its ability to integrate with other technologies
and its open-source software architecture, the Festo Robotino has the potential to continue pushing
the boundaries of what is possible in the field of robotics. Overall, the Festo Robotino is a versatile
and reliable platform that is well-suited to a wide range of applications and has proven itself to be a
valuable tool in the advancement of robotics.

Finally, my project also focuses on the development of robust networking solutions to
ensure the robot can complete its mission even in case of network disconnection. This includes
researching and implementing various communication protocols and technologies to ensure that
the robot can maintain connectivity and communication with its control system at all times. This is
particularly important for delivery robots, as they must be able to navigate and deliver goods even
in areas with poor or no connectivity.

Theoretical Background

Festo Robotino

The Festo Robotino is a highly maneuverable mobile robot designed for use in a variety of
applications, as shown in Fig. 1. This versatile robot is equipped with three omnidirectional wheels and
independent motors, allowing it to move in any direction with precision and control. Additionally,
the Robotino features a sturdy circular stainless-steel frame and a rubber protection strip with
built-in collision protection sensors. The robot also includes nine infrared distance sensors, two
inductive analog sensors, two digital optical sensors, a camera, and the ability to integrate
additional electrical components via an I/O interface. With its advanced features, the Festo
Robotino is ideal for tasks such as following predefined paths, recognizing and avoiding obstacles,
and transporting payloads.

The control system of Robotino is a sophisticated and highly advanced system that allows for
precise navigation and maneuvering within complex environments. The control system is
comprised of a 32-bit microcontroller, which provides motor control, as well as multiple sensors,
including infrared distance sensors, inductive and optical sensors, and a color camera. This sensor
suite enables the Robotino to perceive its surroundings and navigate with high accuracy.
Additionally, the Robotino features a premium or basic edition embedded PC, depending on the
specific needs of the application, and various I/O interfaces for integrating additional electrical
components. The Robotino's control system is designed to be flexible and adaptable and can be
further developed and customized to meet the specific requirements of each project.

Figure 1: Mobile robot Festo Robotino

Robot kinematics deals with the motion and transformations of robots, and it is a crucial aspect of
the design and control of the Festo Robotino. The Robotino is equipped with three omnidirectional
wheels that provide a high degree of maneuverability, allowing the robot to move in any direction.
The wheels are independently controlled by motors, which enable the Robotino to navigate
complex environments with precision and control. To understand the kinematics of the Festo
Robotino, it is important to consider both the geometry of the robot and the mathematical models
that describe its motion. These models are used to calculate the robot's velocity and acceleration, as
well as its position and orientation in the environment. The control system of the Robotino can use
this information to make real-time decisions and execute actions, making the robot highly
responsive and agile. In conclusion, the kinematics of the Festo Robotino plays a vital role in its
design, control, and operation, enabling the robot to move and manipulate objects in its
environment with ease.

Robot sensor and control system
The Festo Robotino is a mobile robot system with a diameter of 450 mm and a height of
290 mm including the controller housing. It has a total weight of approximately 20 kg (without the
mounting tower) and can carry a maximum payload of 30 kg. The robot is equipped with a circular
stainless steel frame that features an omnidirectional drive, allowing it to move in all directions.
The frame also includes a rubber protection strip that has a built-in collision protection sensor.

The robot has nine infrared distance sensors, one inductive sensor, and two optical sensors
that help it to detect its surroundings and avoid obstacles. It also has a color camera with full HD
1080p resolution and USB interface that can be used for visual monitoring and navigation. The
premium edition of the robot comes with a mounting tower that has three mounting platforms,
making it highly versatile and suitable for a wide range of applications.

The Festo Robotino has an embedded PC to COM Express specification and comes in two
editions - the premium edition with an Intel i5 processor, 2.4 GHz, dual-core, 8 GB RAM, and 64
GB SSD, and the basic edition with an Intel Atom processor, 1.8 GHz, dual-core, 4 GB RAM, and
32 GB SSD. It also has WLAN connectivity to specification 802.11g/802.11b as a client or access
point, which makes it easy to communicate with other devices.

The robot has a motor control system with a 32-bit microcontroller and free motor
connection, and it has 2 Ethernet ports, 6 USB 2.0 (HighSpeed) ports, 2 PCI Express slots, and 1
VGA port. It also has a 1x I/O interface that can be used for integrating additional electrical
components. The Festo Robotino is a highly advanced and versatile mobile robot system that can
be used for a wide range of applications, as shown in Figure 2.

Figure 2 Top and front view for Robotino

Kinematic model and some theory

In this section, a comprehensive mathematical analysis of the mobile robot's kinematics and
dynamics is presented. The focus is on the kinematics of the robot body and the dynamics of the DC drives.
The kinematic model of the omnidirectional drive mobile robot can be derived using the following
equation. The angles correspond to the robot drive wheel distribution, with α1 = 60 degrees, α2 = 180
degrees, and α3 = -60 degrees. By separating the coefficients from the expressions, a Jacobian matrix J can
be defined.

$$
J = \frac{1}{r}
\begin{bmatrix}
-\sin\alpha_1 & \cos\alpha_1 & L \\
-\sin\alpha_2 & \cos\alpha_2 & L \\
-\sin\alpha_3 & \cos\alpha_3 & L
\end{bmatrix}
$$

where the angles correspond to the robot drive wheel distribution, α1 = 60° = π/3, α2 = 180° = π, and
α3 = −60° = −π/3; r is the wheel's radius [m], and L is the distance between the center of the robot base
and the center of the wheel [m].

$$
v = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = R(\theta)\, J^{-1} \omega
$$

Where

• v is the velocity vector in the inertial frame,
• ẋ and ẏ are the translational velocities in [m/s] along the corresponding axes of the inertial
coordinate system,
• θ̇ is the rotational velocity of Robotino in [rad/s],
• R(θ) is the rotation matrix from body coordinates to inertial coordinates,
• J is a 3 × 3 matrix containing the constraints provided by the wheels,
• ω is a 3 × 1 vector containing the rotational velocities of each wheel in [rad/s].

By substituting the rotation matrix R(θ) and the constraint matrix J, one gets

$$
v = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix}
-\frac{r}{\sqrt{3}} & 0 & \frac{r}{\sqrt{3}} \\
\frac{r}{3} & -\frac{2r}{3} & \frac{r}{3} \\
\frac{r}{3L} & \frac{r}{3L} & \frac{r}{3L}
\end{bmatrix}
f \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{bmatrix}
$$

• f = 1/16 is the gear ratio for Robotino.
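
To make this mapping concrete, the following is a minimal Python/NumPy sketch of the forward-kinematics
computation above (motor speeds to inertial-frame velocity). The numeric wheel radius and base distance
are illustrative assumptions, not datasheet values.

import numpy as np

R_WHEEL = 0.040    # wheel radius r [m] (assumed value for illustration)
L_BASE = 0.135     # distance L from robot center to wheel center [m] (assumed)
F_GEAR = 1.0 / 16  # gear ratio f

# Inverse constraint matrix J^{-1} for wheel angles 60, 180, and -60 degrees
J_INV = np.array([
    [-R_WHEEL / np.sqrt(3), 0.0, R_WHEEL / np.sqrt(3)],
    [R_WHEEL / 3, -2 * R_WHEEL / 3, R_WHEEL / 3],
    [R_WHEEL / (3 * L_BASE), R_WHEEL / (3 * L_BASE), R_WHEEL / (3 * L_BASE)],
])

def inertial_velocity(theta, omega):
    """Map motor speeds omega = [w1, w2, w3] in rad/s to [x_dot, y_dot, theta_dot]."""
    rotation = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta), np.cos(theta), 0.0],
        [0.0, 0.0, 1.0],
    ])
    return rotation @ J_INV @ (F_GEAR * np.asarray(omega, dtype=float))

# Example: equal wheel speeds produce pure rotation (no translation)
print(inertial_velocity(0.0, [10.0, 10.0, 10.0]))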

The robot is composed of a platform, including an underframe and chassis, with three omnidirectional
wheels. These wheels are positioned at an angle of 120 degrees to each other, and each wheel is powered
independently by one DC motor through a planetary gearbox and a toothed belt. The gearing mechanism is
replaced by a single general belt gear, as shown in figure 3, which decomposes wheel-pitch circumferential
velocities.

Figure 3 Decomposition of wheel-pitch circumferential velocities.

SLAM Algorithms

SLAM Algorithms Definition:


Simultaneous Localization and Mapping (SLAM) is a fundamental computational algorithm used in mobile
robotics and autonomous systems to create a map of the environment while simultaneously determining the
robot's position within that map in real-time. By leveraging a combination of sensors such as cameras,
lidars, and odometry data, SLAM algorithms enable robots to gather information about their surroundings
and navigate autonomously.

The core of the SLAM algorithm lies in the Bayesian filter equation:
Equation 1: Bayesian filter

$$
P(x_k, m \mid z_1, \ldots, z_k, u_1, \ldots, u_k) = \frac{1}{Z} \prod_{i=1}^{k} p(z_i \mid x_k, m, u_{i-1}) \cdot p(x_k \mid x_{k-1}, u_{k-1})
$$

This equation represents the probability of the robot's state (x_k) and the map of the environment (m) given
its sensor measurements (z_1, ..., z_k) and control inputs (u_1, ..., u_k). It consists of two main components:

Product of sensor likelihoods: This part calculates the probability of the robot's sensor measurements given
its state and the map. It takes into account factors such as sensor noise and the correspondence between the
measurements and the map.

Product of motion models: This part calculates the probability of the robot's state given its previous state and
control input. It models the robot's motion dynamics, including uncertainty and constraints.

By applying the Bayesian filter equation, the SLAM algorithm iteratively updates the robot's state and map
based on new sensor measurements. The robot's state is updated using the sensor likelihoods, which refine
the estimate of its position, while the map is updated using the motion models, incorporating new
information about the environment. This recursive nature of the SLAM algorithm allows it to gradually
build a map of the environment while estimating the robot's position, even in the presence of uncertainty. By
integrating sensor measurements and control inputs, SLAM enables the robot to improve its understanding
of the environment over time, making it an essential tool for autonomous systems in various domains.
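
As a small illustration of this recursive predict-then-correct structure, here is a sketch of one discrete
Bayes filter update over a one-dimensional grid. It shows localization only (the map is fixed), and the
transition and measurement numbers are made up for illustration.

import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One recursive Bayes filter update over grid cells.

    belief:        prior P(x) over N cells
    motion_kernel: P(x_k | x_{k-1}) as an N x N matrix (motion model)
    likelihood:    P(z_k | x_k) for each cell (sensor model)
    """
    predicted = motion_kernel @ belief  # prediction with the motion model
    posterior = likelihood * predicted  # correction with the measurement
    return posterior / posterior.sum()  # normalization (the 1/Z term)

# Example: 5-cell corridor; the robot tends to move one cell right per step
N = 5
belief = np.full(N, 1.0 / N)                       # uniform prior
motion = np.eye(N, k=-1) * 0.8 + np.eye(N) * 0.2   # mostly shift right, sometimes stay
measurement = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # sensor favors cell 2
print(bayes_filter_step(belief, motion, measurement))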

Figure 4: SLAM Processing Flow [25]

The SLAM algorithm consists of two main components: front-end processing and back-end processing. The
front-end processing component plays a crucial role in accurately processing the raw sensor data collected
by the robot. It involves several steps, including feature extraction, correspondence matching, and data
association. Feature extraction identifies distinctive features in the sensor data that can serve as reference
points for mapping and localization. These features could be corners, edges, or specific patterns in the

environment. Correspondence matching matches the extracted features with those stored in the map,
allowing the robot to determine its relative position within the environment. Data association ensures correct
alignment of the robot's current view with previous views, enabling the construction of an accurate and
consistent map over time.

The output of the front-end processing component is then passed to the back-end algorithms for further
processing and mapping. The back-end algorithms use the processed sensor data to estimate the robot's
position and orientation within the environment, updating the map representation accordingly. This iterative
process allows the robot to refine its map as it moves through the environment.

It is essential to note that the accuracy and reliability of the front-end processing component significantly
impact the overall performance of the SLAM system. Careful consideration must be given to the selection of
sensors and processing techniques to ensure optimal results.

SLAM is a critical area of research in robotics and computer vision, enabling robots to autonomously create
maps of their surroundings and navigate effectively. By continuously updating the map and determining its
position within it, a robot can make informed decisions, avoid obstacles, and successfully navigate complex
environments. The development of robust SLAM algorithms is vital for the advancement of autonomous
systems, paving the way for enhanced capabilities in various applications, including robotics, self-driving
cars, and augmented reality.

SLAM Algorithm and Probability Theory:


Probability theory plays a fundamental role in the localization and Simultaneous Localization and Mapping
(SLAM) process. It provides a powerful framework for managing and modeling uncertainty, which is
crucial in SLAM due to the inherent noise and incompleteness of sensor data collected by robots.

In SLAM, the system state is represented by a set of variables, including the robot's position, orientation,
and the locations of features in the environment. These variables are treated as random variables and are
associated with probability distributions. By utilizing probability theory, SLAM algorithms can incorporate
prior knowledge, such as the robot's initial position, and effectively handle uncertainty in sensor
measurements.

Various SLAM algorithms, including particle filters and graph-based methods, leverage probability theory
to model and estimate the system state. Monte Carlo methods, in particular, are often employed to sample
from the posterior distribution and obtain estimates of the system state. This integration of probability theory
enables SLAM algorithms to provide robust and accurate representations of the system state, compensating
for measurement noise and other sources of error.

A key concept in SLAM is the modeling of variables as random variables. These variables can assume
different values based on the principles of probability. For example, the position of a robot can be
represented as a random variable X, and the probability of the robot being at a specific location is denoted as
p(X = x). The sum of probabilities for all possible values of the random variable must equal 1. In discrete
probability functions, this is expressed as:

$$
p(X = x)
$$

Equation 2: the probabilities of all possible values of the random variable must sum to 1

$$
\sum_{x} P(X = x) = 1
$$

The SLAM problem involves creating a map of the environment, denoted as M = {m1, m2, ..., mN}, and
recording the robot's movements over time. This is achieved by capturing the robot's pose at each time
step, (x(t), y(t), θ(t)), together with its observation vector Z(t) and control signals U(t). The interval
between each sample is defined as T, the sampling time. The following equations illustrate the relationship
between the robot's pose at time t and its movements:

Equation 3: predicted x-position

$$\hat{x}(t) = x(t-1) + \dot{x}(t)\,T$$

Equation 4: predicted y-position

$$\hat{y}(t) = y(t-1) + \dot{y}(t)\,T$$

Equation 5: predicted heading

$$\hat{\theta}(t) = \theta(t-1) + \dot{\theta}(t)\,T$$
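
A minimal Python sketch of this prediction step, assuming inertial-frame velocities and an illustrative
sampling time of T = 0.1 s:

import math

def predict_pose(x, y, theta, vx, vy, omega, T):
    """One-step pose prediction (Equations 3-5): Euler integration over sampling time T."""
    return x + vx * T, y + vy * T, theta + omega * T

# Example: ten samples (1 s) of translation at 0.5 m/s while turning at pi/10 rad/s
x, y, theta = 0.0, 0.0, 0.0
for _ in range(10):
    x, y, theta = predict_pose(x, y, theta, vx=0.5, vy=0.0, omega=math.pi / 10, T=0.1)
print(x, y, theta)  # approximately (0.5, 0.0, pi/10)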

In summary, probability theory provides a mathematical foundation for SLAM algorithms, enabling the
representation and manipulation of uncertainty. By incorporating probability distributions and Monte Carlo
methods, SLAM algorithms can model the system state, estimate it accurately, and account for measurement
noise and other sources of uncertainty.

SLAM Algorithm Kinds:


Comparative Analysis of SLAM Methods: Selecting the Optimal SLAM Algorithm:
EKF SLAM
  Description: One of the oldest and most basic SLAM algorithms. It is based on the extended Kalman filter, a probabilistic filter that can be used to estimate the state of a system from noisy measurements.
  Advantages: Easy to implement; fast; robust to noise.
  Disadvantages: Can be inaccurate in high-noise environments; not as accurate as other SLAM algorithms.

FastSLAM
  Description: A more efficient version of EKF SLAM. It uses a Rao-Blackwellized particle filter to estimate the state of the system.
  Advantages: Faster than EKF SLAM; more accurate in high-noise environments.
  Disadvantages: More complex to implement; not as robust to noise as EKF SLAM.

GraphSLAM
  Description: A more general SLAM algorithm that can be used to estimate the pose of a robot in an environment with multiple landmarks. It uses a graph to represent the environment, and the robot's pose is estimated by finding the best fit of the graph to the sensor data.
  Advantages: Can handle complex environments; can handle multiple robots.
  Disadvantages: More complex to implement; can be less accurate than other SLAM algorithms.

ORB-SLAM
  Description: A visual SLAM algorithm that uses a bag-of-features approach to represent the environment. It is able to track the robot's pose in real time, and it can also be used to create a map of the environment.
  Advantages: Fast; easy to implement; robust to noise.
  Disadvantages: Not as accurate as other visual SLAM algorithms; not as good at handling complex environments.

Lidar SLAM
  Description: A SLAM algorithm that uses a lidar sensor to represent the environment. Lidar sensors can provide accurate measurements of distance, which makes them well-suited for SLAM applications.
  Advantages: Very accurate; can handle complex environments.
  Disadvantages: More expensive than other SLAM algorithms; not as fast as other SLAM algorithms.

Monte Carlo SLAM
  Description: A probabilistic SLAM algorithm that uses a Monte Carlo approach to estimate the pose of the robot and the map of the environment.
  Advantages: Very accurate; can handle complex environments.
  Disadvantages: More complex to implement; can be slower than other SLAM algorithms.

Iterative Closest Point (ICP)
  Description: A non-probabilistic SLAM algorithm that uses the ICP algorithm to estimate the pose of the robot and the map of the environment.
  Advantages: Fast; easy to implement.
  Disadvantages: Not as accurate as other SLAM algorithms; not as good at handling complex environments.

Bundle Adjustment
  Description: A non-probabilistic SLAM algorithm that uses the bundle adjustment algorithm to estimate the pose of the robot and the map of the environment.
  Advantages: Very accurate; can handle complex environments.
  Disadvantages: More complex to implement; can be slower than other SLAM algorithms.

Graph-based Monte Carlo SLAM
  Description: A hybrid SLAM algorithm that combines the strengths of graph-based SLAM and Monte Carlo SLAM.
  Advantages: Very accurate; can handle complex environments.
  Disadvantages: More complex to implement; can be slower than other SLAM algorithms.

Visual SLAM
  Description: A technique that uses visual sensors, such as cameras, to construct or update a map of an unknown environment while simultaneously estimating the pose of a robot within that environment. It relies on visual features, such as keypoints or landmarks, to track the robot's motion and determine its position and orientation.
  Advantages: Cost-effective and widely available sensors; rich environmental information for mapping and localization; robustness to lighting changes; capable of large-scale mapping; non-invasive and contactless.
  Disadvantages: Sensitivity to feature visibility and occlusions; computational demands and processing power requirements; dependence on accurate camera calibration; limited performance in low-texture environments; accumulation of drift over time.

Table 1 SLAM Algorithm Kinds

SLAM Algorithm and Extended Kalman Filter (EKF):


The EKF is a filtering algorithm commonly used for state estimation in systems where the underlying
dynamics can be described by non-linear models. It combines predictions from a motion model with
measurements from sensors to estimate the state of a system. The EKF assumes that the system's state and
measurement models are differentiable and can be linearized around the current estimate. It is widely used in
various applications, including robotics, navigation, and control, to estimate the state of a system with
uncertain measurements and dynamic models.

Once the Simultaneous Localization and Mapping (SLAM) algorithm has been executed to construct or
update a map of the environment and estimate the robot's pose, the accuracy of the SLAM-based system can
be further improved by applying the Extended Kalman Filter (EKF). After completing the SLAM process,
the EKF can be employed as a post-processing step to refine the estimated robot pose and map. The EKF
operates by fusing additional sensor measurements and incorporating them into the belief state estimation
process.

The primary benefit of using the EKF after SLAM is its ability to handle non-linearities and
uncertainties in the system's dynamics and measurements. The SLAM algorithm, while capable of
producing reasonably accurate results, may still exhibit some level of error and uncertainty. The EKF can
help mitigate these issues and enhance the accuracy of the estimated robot pose and map.

To apply the EKF after SLAM, the current estimated state from the SLAM algorithm serves as the initial
belief state for the EKF. The EKF then incorporates subsequent sensor measurements, such as additional
range or bearing measurements, to update and refine the belief state estimate. The EKF uses its motion and
measurement models, along with the sensor data, to iteratively adjust the state estimate and reduce the
effects of noise and uncertainties.

By incorporating the EKF after SLAM, the system can benefit from the EKF's ability to handle
non-linearities and model uncertainties, leading to improved accuracy and reliability. The EKF's iterative
estimation and correction process can further enhance the localization accuracy of the robot and the quality
of the constructed map. The effectiveness of applying the EKF after SLAM depends on various factors,
including the specific characteristics of the environment, the quality and type of sensor measurements
available, and the accuracy of the initial SLAM estimate. Additionally, the selection of appropriate motion
and measurement models for the EKF plays a crucial role in achieving optimal results.

In summary, integrating the Extended Kalman Filter (EKF) as a post-processing step after executing the
SLAM algorithm can help enhance the accuracy and reliability of the estimated robot pose and map. By
utilizing the EKF's capabilities in handling non-linearities and uncertainties, the system can achieve
improved localization accuracy and better map quality, leading to enhanced performance in various robotics
applications.

The EKF works by predicting the robot's state from past data and correcting that prediction based on each
new measurement. By combining the predicted state and the measured state, the EKF provides a more
accurate estimate of the robot's location, which is essential for navigation and localization in the SLAM
algorithm. The EKF helps the robot overcome errors in its measurements by updating its estimates of the
system state at each time step, leading to a more reliable representation of the environment and the robot's
path over time. The equations in the EKF algorithm describe a process for estimating the state of a
non-linear system, such as a robot, given some measurements and control inputs. The algorithm starts by
initializing the state estimate and its covariance (X(0) and P(0)) and then goes through several steps to
refine the estimate at each time step. The steps are:

o Predict the state at time t:

$$X(t \mid t-1) = f(X(t-1), U(t-1))$$

Equation 6: Predict the state at time t

Here, f(X(t-1), U(t-1)) is a function that predicts the state of the system based on the previous
state X(t-1) and control inputs U(t-1).

o Linearize the system dynamics around the predicted state:

$$F(t) = \left. \frac{\partial f}{\partial X} \right|_{X(t \mid t-1)}$$

Equation 7: Linearize the system dynamics around the predicted state

This step calculates the Jacobian matrix F(t), which describes the linear approximation of
the non-linear system dynamics at the predicted state X(t | t-1).

o Predict the covariance of the state:

$$P(t \mid t-1) = F(t)\, P(t-1 \mid t-1)\, F(t)^T + Q(t)$$

Equation 8: Predict the covariance of the state

where Q(t) is the process noise covariance. This step predicts the uncertainty in the state
estimate by propagating the covariance of the previous state estimate through the linearized
system dynamics.

o Update the state estimate based on the measurement:

$$Z(t) = h(X(t \mid t-1)) + v(t)$$

Equation 9: Update the state estimate based on the measurement

where v(t) is the measurement noise, whose covariance is R(t). Here, h(X(t | t-1)) is a function that
predicts the measurement based on the state estimate, and R(t) quantifies the uncertainty in the
measurement.

o Compute the Kalman gain:

$$K(t) = P(t \mid t-1)\, H(t)^T \left[ H(t)\, P(t \mid t-1)\, H(t)^T + R(t) \right]^{-1}$$

Equation 10: Compute the Kalman gain

where H(t) is the Jacobian of the measurement function h evaluated at the predicted state. The
Kalman gain is a factor that determines how much the measurement should adjust the state
estimate. It depends on the uncertainty in the state estimate P(t | t-1), the measurement model
H(t), and the measurement noise covariance R(t).

o Update the state estimate:

$$X(t) = X(t \mid t-1) + K(t)\left( Z(t) - h(X(t \mid t-1)) \right)$$

Equation 11: Update the state estimate

This step updates the state estimate by combining the predicted state and the
measurement.

o Update the covariance of the state:

$$P(t) = \left( I - K(t)\, H(t) \right) P(t \mid t-1)$$

Equation 12: Update the covariance of the state

This step updates the covariance of the state estimate by taking into account the measurement and the
Kalman gain. The Kalman gain K(t) is a weight that reflects the confidence in the measurement Z(t)
compared to the prediction X(t | t-1), and it is used to correct the state estimate by taking the measurement
into account. In other words, the state estimate is updated by a weighted combination of the prediction and
the measurement, where the weight is given by the Kalman gain. If the measurement is highly confident, the
Kalman gain will be high and the prediction will be corrected significantly by the measurement; if the
measurement is not confident, the Kalman gain will be low and the prediction will be corrected only
slightly. The Kalman gain is a standard quantity in control theory: it represents the weighting factor between
the predicted state and the measured state, and it is used to determine how much of each estimate should be
used to update the state estimate. It is computed from the covariance matrices of the predicted and measured
states and the measurement noise. The EKF algorithm repeats these steps for each time step to continuously
refine the state estimate and its covariance.
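
The steps above can be condensed into a short, generic predict/update function. The following NumPy
sketch is illustrative only: the 1D constant-velocity model, the noise covariances, and the measurement
value are assumptions, and the functions f and h and their Jacobians must be supplied by the caller.

import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF predict/update cycle following Equations 6-12."""
    # Predict (Equations 6-8)
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update (Equations 9-12)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain (Equation 10)
    x_new = x_pred + K @ (z - h(x_pred))        # Equation 11
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # Equation 12
    return x_new, P_new

# Toy example: 1D constant-velocity motion with a position-only measurement
f = lambda x, u: np.array([x[0] + x[1], x[1]])  # state = [position, velocity]
h = lambda x: np.array([x[0]])
F_jac = lambda x, u: np.array([[1.0, 1.0], [0.0, 1.0]])
H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, None, np.array([0.9]), f, h, F_jac, H_jac, 0.01 * np.eye(2), 0.1 * np.eye(1))
print(x)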

In this project, we will utilize ORB SLAM (a visual SLAM method), FastSLAM, GraphSLAM, and EKF
SLAM algorithms to compare their performance for localization and navigation in a delivery robot.
employed to construct a map of the environment and estimate the robot's position, while the Extended
Kalman Filter (EKF) will refine and enhance the accuracy of the position estimate. The robot will operate
autonomously, navigating from its initial position to a target location while avoiding obstacles and static
objects. When a network connection is available, the robot will utilize it for communication and receive
additional localization information. However, in the absence of a stable network connection, the robot will
rely on the preexisting map data and the SLAM algorithms to determine its position and plan a path to the
target. The robot's motion control capabilities will be utilized to adapt its path based on new sensor data. The
primary focus of this project will be on developing reliable networking solutions to ensure the robustness of
the robot's navigation and delivery mission, even in scenarios involving network disconnections.

Graph SLAM:
What is graph SLAM?
Graph SLAM is a type of SLAM algorithm that represents the environment as a graph. The nodes of the
graph represent the robot's poses, and the edges of the graph represent the spatial constraints between the
poses. These constraints naturally arise from odometry measurements and from feature observations or scan
matching.

How does graph SLAM work?


The SLAM front-end interprets the sensor data to extract the spatial constraints. The SLAM back-end
typically applies optimization techniques to estimate the configuration of the nodes that best matches the
spatial constraints. The optimization problem can be formulated as a maximum likelihood estimation
problem, or as a least squares problem.
A simplified pseudo-code representation of how Graph SLAM works:

Initialize an empty graph

for each time step t in the sensor measurements:
    Predict the robot's motion using odometry readings
    Update the robot's pose in the graph

    for each observed landmark in the sensor measurements:
        if the landmark is not in the graph:
            Add a landmark node to the graph
        Compute the measurement model for the landmark
        Compute the measurement error between the predicted and observed landmark
        Add a constraint (edge) between the robot pose and the landmark in the graph
        Assign the measurement error to the constraint

    Optimize the graph to estimate the robot's trajectory and landmark positions
    Apply a graph optimization algorithm (e.g., Gauss-Newton, Levenberg-Marquardt)
end for

Extract the optimized trajectory and landmark positions from the graph

In this pseudo code, the graph represents the structure that holds the robot's trajectory and the positions of
observed landmarks. At each time step, the robot's motion is predicted using odometry readings, and the
robot's pose is updated in the graph. For each observed landmark, the measurement model is computed
based on the sensor readings, and the measurement error is calculated by comparing the predicted and
observed landmark positions. The landmark node is added to the graph if it doesn't exist, and a constraint
(edge) is added between the robot pose and the landmark in the graph. After processing all sensor
measurements, the graph is optimized using a graph optimization algorithm to minimize the measurement
errors and obtain the best estimate of the robot's trajectory and landmark positions. Finally, the optimized
trajectory and landmark positions can be extracted from the graph for further analysis or visualization.
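
As a concrete illustration of the optimization step at the end of this loop, the following NumPy sketch
solves a one-dimensional pose graph as a weighted linear least-squares problem. The odometry and
loop-closure measurements are invented for illustration; real SLAM graphs are 2D/3D and non-linear, so
the back-end iterates this kind of linear solve (e.g., in Gauss-Newton fashion).

import numpy as np

# 1D pose graph: unknowns are poses p0..p3; each edge states p_j - p_i = meas.
# Edge format: (i, j, measurement, information weight)
edges = [
    (0, 1, 1.0, 1.0),  # odometry: moved about 1.0 between p0 and p1
    (1, 2, 1.0, 1.0),
    (2, 3, 1.0, 1.0),
    (0, 3, 2.7, 2.0),  # loop closure: p3 re-observed from p0 (slightly conflicting)
]
n = 4
H = np.zeros((n, n))  # information matrix (A^T W A)
b = np.zeros(n)       # information vector (A^T W y)
H[0, 0] += 1e6        # anchor p0 = 0 with a strong prior (fixes the gauge freedom)

for i, j, meas, w in edges:
    # Each edge contributes a row with -1 at pose i and +1 at pose j
    H[i, i] += w
    H[j, j] += w
    H[i, j] -= w
    H[j, i] -= w
    b[i] -= w * meas
    b[j] += w * meas

poses = np.linalg.solve(H, b)  # minimizes the sum of weighted squared edge errors
print(poses)                   # odometry and loop closure are reconciled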

What are the advantages of graph SLAM?


Graph SLAM has several advantages over other SLAM algorithms. First, it is a very flexible framework that
can be used to represent a wide variety of environments. Second, it is relatively robust to noise and errors.
Third, it can be used to track the robot's pose and map the environment simultaneously.

What are the disadvantages of graph SLAM?
Graph SLAM also has some disadvantages. First, it can be computationally expensive to solve the
optimization problem. Second, the graph can become very large and complex, which can make it difficult to
maintain and update.

What are some of the challenges in graph SLAM?


Some of the challenges in graph SLAM include:

Data association: The SLAM algorithm must be able to associate sensor measurements with the correct
poses in the graph. This can be difficult, especially in cluttered environments.

Loop closure: The SLAM algorithm must be able to detect and handle loop closures. Loop closures occur
when the robot revisits a location that it has already been to.

Graph management: The SLAM algorithm must be able to manage the graph effectively. This includes
adding new nodes and edges to the graph, and removing old nodes and edges from the graph.

Visual SLAM:
Visual SLAM is a type of SLAM algorithm that utilizes visual information, obtained from cameras, depth
sensors, or other image and depth data capturing devices, to track the robot's pose and map the environment.
It consists of a front-end component that extracts and tracks features from the visual data, and a back-end
component that estimates the robot's pose and creates the map using the tracked features. Visual SLAM
offers several advantages, including cost-effectiveness, applicability to various environments, and the ability
to simultaneously track pose and map the environment. However, it also has drawbacks, such as increased
sensitivity to noise and errors, challenges in feature tracking in cluttered environments, and potential
computational complexity when dealing with a large number of features.

A comparison between popular visual SLAM algorithms, to choose a suitable one for the project:

Monocular SLAM
  Sensor: single camera
  Accuracy: less accurate
  Complexity: less complex
  Computational cost: less expensive
  Applications: low-texture environments, simple tasks
  Advantages: simple, lightweight, can be used in low-texture environments
  Disadvantages: less accurate than stereo SLAM, can be more sensitive to noise

Stereo SLAM
  Sensor: two cameras
  Accuracy: more accurate
  Complexity: more complex
  Computational cost: more expensive
  Applications: more challenging environments, complex tasks
  Advantages: more accurate than monocular SLAM, can be used in challenging environments
  Disadvantages: more complex and computationally expensive than monocular SLAM

ORB-SLAM
  Sensor: single camera
  Accuracy: accurate, robust
  Complexity: moderately complex
  Computational cost: moderately expensive
  Applications: indoor and outdoor environments, autonomous navigation
  Advantages: accurate, robust, able to work in real time
  Disadvantages: not as versatile as some other SLAM methods, can be more difficult to use

RGB-D SLAM
  Sensor: camera with RGB and depth sensors
  Accuracy: more accurate
  Complexity: more complex
  Computational cost: more expensive
  Applications: indoor and outdoor environments, complex tasks
  Advantages: more accurate than monocular SLAM, can build more detailed maps
  Disadvantages: more complex and computationally expensive than monocular SLAM

Visual-Inertial SLAM
  Sensor: IMU + camera
  Accuracy: most accurate
  Complexity: most complex
  Computational cost: most expensive
  Applications: most challenging environments, highest level of accuracy and robustness
  Advantages: combines the strengths of visual SLAM and IMU-based SLAM
  Disadvantages: most complex and computationally expensive type of SLAM

IMU-based SLAM
  Sensor: IMU + camera
  Accuracy: more accurate than visual SLAM alone
  Complexity: moderately complex
  Computational cost: moderately expensive
  Applications: applications where the robot needs to track its pose even when the camera is not visible
  Advantages: can track the robot's pose even when the camera is not visible
  Disadvantages: more expensive than visual SLAM alone, can be more sensitive to noise

Table 2 Visual SLAM kinds

ORB Visual SLAM:


ORB-SLAM is a versatile visual SLAM algorithm introduced in 2015 by Raúl Mur-Artal and Juan D.
Tardós. It utilizes Oriented FAST and Rotated BRIEF (ORB) features to track the robot's pose and create a
map of the environment in real-time. ORB-SLAM has demonstrated its effectiveness in various settings,
both indoor and outdoor, making it applicable to robotics, augmented reality, and autonomous driving
applications. Key attributes of ORB-SLAM include its real-time performance, robustness to noise and errors
in visual data, accuracy across diverse environments, flexibility to work with different sensors, such as
cameras, depth sensors, and inertial measurement units, and its open-source nature, allowing for free usage
and modifications. Overall, ORB-SLAM is a powerful and adaptable visual SLAM solution that can
enhance mapping and localization tasks in a wide range of applications.

How does ORB visual SLAM work?


A simplified pseudo-code representation of how ORB visual SLAM works:

Initialize an empty map

for each video frame f:
    Detect ORB features in f
    Compute descriptors for the detected features

    if it's the first frame:
        Initialize the map by adding a keyframe with the detected features and descriptors
        Set the initial camera pose
    else:
        Match the current frame's descriptors with the previous frame's descriptors
        Perform feature matching and filtering (e.g., using RANSAC)

        if enough matches are found:
            Estimate the camera pose using the matched features (e.g., using the PnP algorithm)
            Perform bundle adjustment to refine camera poses and 3D points

        if loop closure is detected:
            Perform loop closure detection and correction
            Optimize the map by adjusting the camera poses and 3D points

        if keyframe selection criteria are met:
            Add the current frame as a new keyframe to the map
            Perform keyframe culling to remove redundant keyframes and optimize the map

    Update the current frame as the previous frame for the next iteration
end for

In this pseudo code, ORB Visual SLAM utilizes the ORB (Oriented FAST and Rotated BRIEF) features for
feature detection and description. It processes a sequence of video frames and builds a map of the
environment while estimating the camera poses. For each frame, ORB features are detected and descriptors
are computed. If it's the first frame, the map is initialized with a keyframe containing the detected features
and descriptors, and the initial camera pose is set. For subsequent frames, feature matching is performed
between the current frame's descriptors and the previous frame's descriptors. If enough matches are found,
the camera pose is estimated using the matched features, and bundle adjustment is performed to refine the
camera poses and 3D points. If loop closure is detected, loop closure detection and correction are performed
to handle revisited areas. The map is optimized by adjusting the camera poses and 3D points to improve
consistency. Keyframe selection criteria are applied to determine when to add a new keyframe to the map.
Redundant keyframes are culled to optimize the map and improve efficiency. The process continues until all
frames are processed, resulting in a map representation and estimated camera poses.
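
To make the feature detection and matching stage concrete, the following is a minimal Python sketch using OpenCV's ORB implementation. It covers only the front-end matching step of the pipeline above, not pose estimation, bundle adjustment, or loop closure; "frame1.png" and "frame2.png" are placeholder names for two consecutive video frames.

import cv2

orb = cv2.ORB_create(nfeatures=1000)

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher with Hamming distance, suitable for binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches between the two frames")

From matches like these, the pose-estimation step of the pipeline would recover the camera motion, for example with cv2.solvePnP once corresponding 3D points are available.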

Fast SLAM:
FastSLAM is a probabilistic SLAM algorithm based on a Rao-Blackwellized particle filter, combining particle filtering (as in Monte Carlo localization, MCL) with per-particle map estimates. It maintains a belief distribution over the robot's pose and the environment map using a set of particles, where each particle represents a hypothesized pose and map. The algorithm updates the particles through a motion update step, where the particles are updated based on the robot's motion
model, and a measurement update step, where the particles are updated based on sensor measurements.
FastSLAM offers advantages such as efficiency, robustness to sensor noise, and the ability to
simultaneously track pose and map. However, it can have computational challenges during initialization,
sensitivity to the number of particles chosen, and difficulties in tracking pose in highly dynamic
environments.

How does Fast SLAM work?


Simplified pseudo code representation of how Fast SLAM works:

Initialize particles with random poses and initial weights

for each time step t in the sensor measurements:
    Predict the robot's motion using odometry readings
    Update particle poses based on the motion model

    for each particle:
        for each observed landmark in the sensor measurements:
            Compute the measurement model for the landmark
            Compute the likelihood of the landmark measurement given the particle's pose
            Update the particle's weight based on the measurement likelihood

    Resample particles based on their weights

    Estimate the robot's pose using the weighted average of the particle poses

    for each observed landmark in the sensor measurements:
        Update the landmark's position in the map based on the highest-weighted particle

end for

Extract the final map from the particles with their associated weights

In this pseudo code, FastSLAM uses a set of particles to represent possible robot poses and their associated
maps. At each time step, the robot's motion is predicted using odometry readings, and the particle poses are
updated based on a motion model. For each particle, the algorithm processes the observed landmarks. The
measurement model is computed, and the likelihood of the landmark measurement given the particle's pose
is evaluated. The particle's weight is updated based on the measurement likelihood. After updating the
particle weights, a resampling step is performed to select new particles for the next iteration. The probability
of selection is proportional to the particle's weight. The robot's pose is estimated by computing the weighted
average of the particle poses, providing an estimate of the robot's position. Finally, the map is updated by
associating each observed landmark with the highest-weighted particle and updating its position in the map.
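
As an illustration of the weight-update and resampling steps described above, here is a minimal NumPy sketch for a single range measurement to one known landmark. The motion-update step is omitted and all numeric values are made up for the example.

import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # number of particles
poses = rng.normal([0.0, 0.0], 0.5, (N, 2))  # hypothesized (x, y) positions
weights = np.full(N, 1.0 / N)

landmark = np.array([2.0, 1.0])              # assumed landmark position
z, sigma = 2.2, 0.1                          # measured range and noise std. dev.

# Measurement update: Gaussian likelihood of the measured range per particle
expected = np.linalg.norm(poses - landmark, axis=1)
weights *= np.exp(-0.5 * ((z - expected) / sigma) ** 2)
weights /= weights.sum()

# Pose estimate: weighted average of the particle poses
print("pose estimate:", np.average(poses, axis=0, weights=weights))

# Resampling: draw new particles in proportion to their weights
idx = rng.choice(N, size=N, p=weights)
poses = poses[idx]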

EKF SLAM:

EKF SLAM is a SLAM algorithm that utilizes an extended Kalman filter (EKF) to estimate the robot's pose
and map the environment. The EKF is a recursive filter that can handle noisy sensor measurements and
estimate the state of a dynamic system. In EKF SLAM, a belief distribution represented by a Gaussian
distribution is maintained, and it is updated in two steps. The motion update incorporates the robot's motion
model into the distribution, while the measurement update incorporates sensor measurements. EKF SLAM
offers advantages such as real-time efficiency, relative ease of implementation, and simultaneous tracking of
pose and mapping.

How does EKF SLAM work?


Simplified pseudo code representation of how EKF SLAM works:

Initialize empty map and the robot's initial pose

for each time step t in the sensor measurements:
    Predict the robot's motion using odometry readings
    Update the robot's pose based on the motion model

    for each observed landmark in the sensor measurements:
        if the landmark is new:
            Add the landmark to the map with an initial estimate
        else:
            Retrieve the landmark's previous estimate from the map

        Compute the expected measurement based on the current robot pose and landmark estimate
        Compute the measurement Jacobian matrix

        Update the landmark's estimate using the Extended Kalman Filter equations:
            - Compute the Kalman gain
            - Compute the measurement residual
            - Update the landmark's estimate based on the Kalman gain and measurement residual

    Update the robot's pose and covariance using the Extended Kalman Filter equations:
        - Compute the motion Jacobian matrix
        - Compute the motion residual
        - Update the robot's pose and covariance based on the motion Jacobian and residual

end for

In this pseudo code, EKF SLAM estimates the robot's pose and landmark positions in an environment using
an Extended Kalman Filter. At each time step, the robot's motion is predicted using odometry readings, and
the robot's pose is updated based on a motion model. For each observed landmark in the sensor
measurements, the algorithm checks if the landmark is new or already in the map. If it's a new landmark, it
is added to the map with an initial estimate. The expected measurement is computed based on the current
robot pose and landmark estimate, and the measurement Jacobian matrix is computed. The landmark's
estimate is updated using the Extended Kalman Filter equations, which involve computing the Kalman gain,
measurement residual, and updating the estimate based on the gain and residual. Similarly, the robot's pose
and covariance are updated using the Extended Kalman Filter equations, considering the motion Jacobian
matrix and motion residual. The process continues until all sensor measurements are processed, resulting in
an estimated map of landmarks and the robot's trajectory.
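
The following NumPy sketch illustrates the measurement-update equations for a single landmark, assuming for simplicity that the robot pose is known; in full EKF SLAM the robot pose and all landmarks live in one joint state vector and covariance matrix. All numbers are illustrative.

import numpy as np

rx, ry = 0.0, 0.0                 # robot position (assumed known here)
m = np.array([2.0, 1.0])          # current landmark estimate (x, y)
P = np.eye(2) * 0.5               # landmark covariance
R = np.diag([0.05**2, 0.02**2])   # measurement noise (range, bearing)
z = np.array([2.3, 0.45])         # measured (range, bearing)

dx, dy = m[0] - rx, m[1] - ry
q = dx**2 + dy**2
z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx)])  # expected measurement

# Jacobian of the range-bearing model w.r.t. the landmark position
H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
              [-dy / q,         dx / q]])

S = H @ P @ H.T + R               # innovation covariance
K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
m = m + K @ (z - z_hat)           # updated landmark estimate
P = (np.eye(2) - K @ H) @ P       # updated covariance
print("updated landmark:", m)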

Landmarks:
In EKF SLAM (Extended Kalman Filter SLAM), landmarks are distinctive features or points of interest in the environment that the robot can observe and use to estimate its own position and orientation. These landmarks can be objects, corners, edges, or key points that have spatial coordinates (e.g., x, y) in a global or map frame of reference. The EKF SLAM algorithm aims to estimate the positions of these landmarks and the robot's pose by iteratively incorporating sensor measurements and motion updates.
During the SLAM process, the robot's sensors, such as cameras, lasers, or range finders, detect and provide
measurements of the observed landmarks in the environment. These measurements, combined with the
robot's motion information, are used to update the estimates of both the robot's pose and the landmark
positions. By continuously updating these estimates, EKF SLAM builds an accurate map of the environment
while simultaneously localizing the robot within that map.

Landmarks play a crucial role in SLAM algorithms as they provide essential information for mapping and
localization. They serve as reference points for position estimation, enabling the robot to improve its own
localization by detecting and estimating distances and bearings to these landmarks. Landmarks also facilitate
data association, helping the robot match observed features with known landmarks to determine
measurement correspondences. Moreover, landmarks contribute to map creation by representing the spatial
layout of the environment. The map built using landmark positions can be utilized for navigation, path
planning, and interaction with the surroundings. Landmarks also provide redundancy, ensuring robustness in
the face of sensor noise or temporary unavailability of measurements.

Furthermore, landmarks aid in loop closure detection, which occurs when a robot revisits a previously
visited location. By recognizing landmarks that have been observed before, the robot can close the loop and
improve the consistency of the map. Loop closure helps correct accumulated errors in position estimation
and map representation. Overall, landmarks are vital in SLAM algorithms, offering reliable reference points,
aiding in data association, contributing to map creation, increasing system robustness, and enabling loop
closure detection. They enable accurate localization and mapping in various robotic applications, including
navigation, exploration, and mapping.

Localization and mapping:

Map accuracy refers to the level of precision and correctness in the map of the environment created by a SLAM algorithm. In SLAM, the goal is to generate a map that accurately represents the structure and features of the surroundings. Map accuracy is crucial as it directly impacts the performance and reliability of robotic systems relying on the map for navigation and localization. To assess map accuracy, a common approach is to compare the generated map with ground truth data or a reference map. Various metrics can be employed, such as point-to-point or point-to-line distances, alignment errors, or overlapping ratios. These metrics evaluate how closely the generated map matches the true environment.

Factors affecting map accuracy include sensor noise, calibration errors, drift in pose estimation, and the complexity of the environment. Sensor noise can introduce uncertainty in the measurements, leading to inaccuracies in the mapped features. Calibration errors can misalign the sensor data, affecting the overall map quality. Pose estimation drift, where accumulated errors cause the robot's estimated position to deviate from the true position, can also impact map accuracy. Improving map accuracy often involves refining sensor calibration, employing robust feature extraction algorithms, utilizing accurate motion models, and incorporating sensor fusion techniques. Furthermore, incorporating loop closure detection and optimization methods can aid in reducing accumulated errors and enhancing map consistency.

Accurate maps are vital for applications such as autonomous navigation, robot localization, and environmental monitoring. High map accuracy enables robots to plan optimal paths, avoid obstacles, and effectively operate in complex and dynamic environments. Therefore, assessing and enhancing map accuracy is crucial to ensure reliable and efficient robotic systems.

The mapping accuracy in SLAM can be evaluated using different metrics depending on the specific
application and requirements. One commonly used metric is the mean landmark error, which measures the
average difference between the estimated positions of landmarks in the map and their ground truth positions.

The mean landmark error can be calculated using the following equation:

Mean Landmark Error = (1/N) * Σ_i |estimated_position_i − ground_truth_position_i|

Equation 13 mean landmark error

where:
- N is the total number of landmarks in the map.
- estimated_position represents the estimated position of a landmark in the map.
- ground_truth_position represents the known or ground truth position of the same landmark.

In this equation, the absolute difference between the estimated position and the ground truth position is calculated for each landmark, and then averaged over all landmarks to obtain the mean landmark error. This metric provides a measure of how accurately the SLAM system is able to estimate the positions of landmarks in the environment. A lower mean landmark error indicates a higher mapping accuracy, meaning the estimated map aligns closely with the ground truth map. It's important to note that there may be other metrics used to evaluate mapping accuracy in different SLAM systems, depending on the specific requirements or constraints of the application.
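
As a quick illustration, Equation 13 can be computed in a few lines of NumPy; the landmark coordinates below are made up for the example.

import numpy as np

estimated    = np.array([[1.0, 2.1], [3.2, 0.9], [5.1, 4.0]])  # estimated landmark positions
ground_truth = np.array([[1.0, 2.0], [3.0, 1.0], [5.0, 4.0]])  # ground truth positions

errors = np.linalg.norm(estimated - ground_truth, axis=1)  # per-landmark error
mean_landmark_error = errors.mean()                        # (1/N) * sum of errors
print(mean_landmark_error)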

Localization accuracy in SLAM can be evaluated using various metrics, depending on the specific
requirements and characteristics of the system. One commonly used metric is the distance error, which
measures the difference between the estimated position of the robot and its ground truth position.

The distance error can be calculated using the following equation:

Distance Error = |estimated_position − ground_truth_position|

Equation 14 Distance Error

where:
- estimated_position represents the estimated position of the robot.
- ground_truth_position represents the known or ground truth position of the robot.

In this equation, the absolute difference between the estimated position and the ground truth position is
calculated to quantify the localization error.
Another commonly used metric is the angle error, which measures the angular difference between the
estimated orientation of the robot and its ground truth orientation. The angle error can be calculated as:

Angle Error = |estimated_orientation − ground_truth_orientation|

Equation 15 Angle Error

where:
- estimated_orientation represents the estimated orientation of the robot.
- ground_truth_orientation represents the known or ground truth orientation of the robot.

Similar to the distance error, the absolute difference between the estimated orientation and the ground truth orientation is calculated to assess the angular localization error (in practice the difference is usually wrapped to the range [−π, π], so that headings near ±180° are compared correctly). These metrics provide a measure of how accurately the SLAM system can estimate the position and orientation of the robot. Lower distance and angle errors indicate higher localization accuracy, meaning the estimated pose aligns closely with the ground truth pose. It's worth mentioning that different SLAM systems may employ additional or alternative metrics to evaluate localization accuracy, based on the specific requirements and constraints of the application.
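
A small NumPy sketch of Equations 14 and 15, with the angle difference wrapped to [−π, π] as discussed above; the poses are made-up values.

import numpy as np

est_pos, gt_pos = np.array([1.2, 0.8]), np.array([1.0, 1.0])
est_yaw, gt_yaw = np.deg2rad(179.0), np.deg2rad(-179.0)

distance_error = np.linalg.norm(est_pos - gt_pos)

# Wrap the angular difference to [-pi, pi] before taking the absolute value
angle_error = abs(np.arctan2(np.sin(est_yaw - gt_yaw), np.cos(est_yaw - gt_yaw)))

print(distance_error, np.rad2deg(angle_error))  # angle error is 2 deg, not 358
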
ROS2 Humble:
ROS 2 (Robot Operating System 2) is an advanced software framework consisting of libraries and tools specifically developed for constructing robot applications. As the successor to ROS 1, ROS 2 has been designed to offer improved scalability, reliability, and security. Among the various releases of ROS 2, the eighth distribution, known as ROS 2 Humble, was launched in May 2022, bringing with it a host of new features and enhancements.

ROS 2 Humble boasts several notable features. One prominent element is its high-performance default middleware, Fast DDS (formerly known as FastRTPS), which is tailored for real-time applications. Additionally, ROS 2 Humble places a strong emphasis on security, incorporating enhancements such as support for encryption and authentication. To further enhance usability, ROS 2 Humble includes a range of new tools, including an intuitive graphical user interface (GUI) for efficient management of ROS 2 projects. Furthermore, the documentation for ROS 2 Humble has been significantly improved, providing users with comprehensive and accessible resources.

With its focus on reliability, security, and user-friendliness, ROS 2 Humble represents a significant milestone in the evolution of ROS 2. It serves as an ideal choice for developers seeking a powerful and scalable middleware to facilitate the development of robotics applications.

RVIZ
Rviz2 is a port of Rviz to ROS 2. It provides a graphical interface for users to view their robot, sensor data, maps, and more. It is installed by default with the ROS 2 desktop packages and requires a desktop version of Ubuntu to use; once the ROS 2 environment is sourced, it can be launched with the command `ros2 run rviz2 rviz2`.

Webots:
Webots is a versatile and free 3D robot simulator that finds application in industry, education, and research. With its extensive capabilities, Webots allows users to simulate robot behavior in virtual environments. Equipped with a vast library of robots, sensors, and actuators, Webots employs a physics engine that faithfully reproduces the real-world characteristics of these components. This empowers users to create highly realistic simulations of robots and their corresponding environments.

What sets Webots apart is its user-friendly programming features. Users can conveniently program robots using various languages such as C, C++, Python, Java, MATLAB, and ROS. The inclusion of a graphical user interface further simplifies the creation and modification of robot programs. Key features of Webots include its open-source nature, cross-platform compatibility (Windows, macOS, and Linux), extensive robot library, physics engine for realistic simulations, support for multiple programming languages, and an intuitive graphical user interface.

Webots serves as an invaluable tool for numerous individuals and groups. Robot developers can leverage it to simulate robot behavior before physical implementation, thereby identifying and rectifying potential issues in their designs. Researchers benefit from Webots' capability to study robot behavior in diverse environments, enabling the development of novel algorithms and control techniques. Lastly, educators can utilize Webots to impart robotics knowledge, allowing students to grasp fundamental principles of robot control and enhance their programming skills.

2D SLAM MRPPT
2D SLAM (Simultaneous Localization and Mapping) with MRPPT (Maximum Likelihood Relative Pose Transform) is an approach used to estimate the position of a robot and a map of a 2D environment using sensor data. MRPPT is a technique commonly employed in SLAM algorithms to determine the robot's relative pose (movement) between consecutive time steps.

In 2D SLAM with MRPPT, the robot utilizes various sensors, such as laser range finders or cameras, to
gather data about its surroundings. The sensor data is processed to extract relevant features and landmarks in
the environment, such as walls or objects. The robot then estimates its position and orientation (localization)
using the acquired sensor measurements and previously constructed map.

MRPPT plays a crucial role in this process by estimating the robot's relative pose between consecutive time
steps based on the observed sensor data. It utilizes probabilistic techniques, such as maximum likelihood
estimation, to determine the most likely transformation of the robot's pose between two time steps. By
iteratively applying MRPPT, the robot can incrementally build a map of the environment while
simultaneously updating its localization estimates.
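
In symbols, this relative-pose step can be written as a maximum-likelihood problem. The formulation below is an illustrative sketch of the idea, not notation taken from a specific MRPPT reference:

\hat{T}_t = \arg\max_{T} \; p\big(z_t \mid m_{t-1},\, x_{t-1} \oplus T\big)

where z_t is the current sensor observation, m_{t-1} is the map built so far, x_{t-1} is the previous pose estimate, and \oplus denotes pose composition.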

The combination of 2D SLAM and MRPPT enables a robot to autonomously explore and navigate an
unknown 2D environment while simultaneously building a map of its surroundings. This technology finds
applications in various fields, including robotics research, autonomous navigation, and mapping for tasks
such as localization, path planning, and obstacle avoidance.

System Design
System Requirements

In the Webots simulation, supported by other platforms, the Festo mobile robot (Robotino) will utilize the SLAM algorithm to enhance its mapping and localization capabilities. This combination of algorithms allows the robot to accurately map its environment and estimate its position within the map. Equipped with sensors such as cameras and an IMU, the robot creates a 3D representation of the surroundings and employs the maps for obstacle avoidance and efficient movement. Actuators like wheels, motors, and controllers enable the robot to navigate in any direction and rotate as needed. The robot's networking capability ensures connectivity for communication and localization, enabling it to receive instructions and updates. In the event of network disruption, the robot relies on stored map data and SLAM algorithms to determine its position. The simulation focuses on robust networking solutions and implements a reconnection protocol for uninterrupted missions. A user interface accessible through the Festo web platform enables remote control, monitoring, and configuration. Safety measures, including obstacle detection sensors and emergency stop mechanisms, prioritize the robot's stability and prevent collisions. Regular software testing and updates address security vulnerabilities, while a power management system optimizes battery consumption for maximum autonomy. The Webots simulation provides a realistic environment to evaluate the performance of the Festo mobile robot equipped with different types of SLAM and other relevant algorithms.
Methodology

To carry out the project involving Robotino and SLAM algorithms, start by familiarizing yourself with Robotino and learning about SLAM algorithms. Choose suitable software tools or libraries for implementing SLAM on the Robotino platform. Collect sensor data, implement the selected SLAM algorithm, and evaluate the accuracy of the system by calculating error metrics. Use the SLAM algorithm to build and update a map of the environment, and iterate on the implementation to improve accuracy and mapping quality.

System Block Diagram

The system block diagram for the delivery robot includes a power block, sensor and camera block, SLAM blocks, data storage block, API interface, controller block, actuating unit (motor driver), and feedback loops. The power block provides the necessary power supply, while the sensor and camera block gathers environment information. The SLAM blocks process and refine this data, which is stored in the data storage block and connected to the API interface. The controller block receives information from the API interface and data storage block to control the robot's motion through the actuating unit. Feedback loops monitor and adjust system performance. The project focuses on developing robust networking solutions to ensure reliable navigation and delivery, even in the event of network disconnection. In such cases, the robot relies on saved map data and SLAM algorithms to determine its position and calculate accuracy.

Figure 5 System Block Diagram

Flow Chart

This flowchart represents the integration of SLAM for a delivery robot, in the context of a project focused on developing a robust, autonomous delivery robot. The robot, equipped with the SLAM algorithm, will be capable of traveling from a home position to a target location while avoiding obstacles and other static elements, either by relying on network information or previously saved map data. The main goal of the project is to ensure the reliability of the robot's navigation and delivery mission even in case of network disconnection, through the development of robust networking solutions. The flowchart outlines the steps involved in the robot's localization and navigation process, including initialization, network connection check, map construction and estimation of the robot's position, refinement of the position estimate, path planning, navigation to the target location, and adjusting the path based on new sensor information. The flowchart starts with initializing the robot's position and map, which involves determining the starting location of the robot and creating an initial map of the environment using prior knowledge or the robot's sensors. The initial robot position and map data are then stored.

The next step is to check for a network connection. If the connection is available, the robot receives
additional information for localization. However, if the connection is lost or unstable, the robot relies on its
saved map data. The SLAM algorithm is then used to construct a map of the environment and estimate the
robot's position within the map. The robot then plans a path to the target location and navigates to it while
avoiding obstacles. If new sensor information is available, the robot adjusts its path accordingly. This
process is repeated until the target location is reached.

In conclusion, the flowchart is designed to ensure that the robot can complete its mission, even if a network
connection is lost or unstable. The focus of this project is on developing robust networking solutions and
integrating SLAM for a delivery robot to provide reliable navigation and delivery services. The SLAM
process begins with the initialization of the robot's position and a rough map of the environment. The robot
uses its onboard sensors such as odometry or laser rangefinder readings to gather information about its
surroundings. This information is then used to construct a graph representation of the environment. Graph
optimization algorithms are applied to estimate the robot's position within the environment map. As the
robot continues to move and gather more information, the map and the robot's position estimate are updated
accordingly. This process is repeated until the robot reaches its target location.

Figure 6 Flow chart

IMPLEMENTATION:
Designing and Implementing the Experimental Setup
The project initially involved working with the real Robotino robot for the implementation of the SLAM algorithm. However, due to certain challenges and limitations encountered during the early stages of the project, a decision was made to switch to simulation software. After extensive research and exploration of various options, the Webots simulation software was chosen as the platform to continue the project.

Webots provides a realistic and controlled virtual environment that closely mimics real-world scenarios,
allowing for thorough testing and evaluation of the robot's capabilities. It offers a wide range of features and
functionalities, including accurate physics simulation, sensor modeling, and flexible programming
interfaces, making it an ideal choice for this project.

The transition from working with the real Robotino to using Webots required adapting the existing codebase and integrating it into the simulation environment. The Robotino's hardware components, such as sensors and actuators, were emulated within Webots to replicate the robot's functionality accurately.

By leveraging Webots, the project team was able to continue the development and evaluation of the SLAM algorithm in a more controlled and efficient manner. The simulation environment provided the flexibility to create various test scenarios, adjust parameters, and collect data for analysis, enabling comprehensive validation and optimization of the SLAM algorithm's performance.

Additionally, Webots offered the advantage of easy scalability, as multiple instances of the simulated Robotino robots could be deployed simultaneously for parallel testing and comparison of different SLAM algorithms, such as visual SLAM, graph SLAM, and ORB SLAM.

Overall, the decision to switch to Webots as the simulation software brought significant benefits to the project, including increased flexibility, scalability, and efficient development and testing processes. It enabled the project team to overcome the challenges faced with the real Robotino and continue making progress towards achieving the project's objectives.
This project was implemented in the following steps:

1. ROS2 Setup:
The Robot Operating System (ROS) is a set of software libraries and tools for building robot applications. From drivers and state-of-the-art algorithms to powerful developer tools, ROS has the open-source tools you need for your next robotics project.
Since ROS was started in 2007, a lot has changed in the robotics and ROS community. The goal of the ROS 2 project is to adapt to these changes, leveraging what is great about ROS 1 and improving what isn't. Note that ROS 1 reaches end of life in 2025, so when choosing packages, be careful to pick ROS 2 packages.
ROS 2 has many distributions, such as Foxy, Iron, Humble, and Rolling.
Here is a comparison of some ROS 2 distributions, to help choose the suitable one for your project:

Feature | Foxy | Iron | Humble
Release date | June 2020 | May 2023 | May 2022
Major features | Improved security, performance, and stability | New community processes, features, performance enhancements, tools, and quality improvements | Increased focus on usability and developer productivity
Target audience | Production robots | Industrial robots | Research robots
Recommended for | Users who need a stable and reliable distribution for production robots | Users who need a secure and high-performance distribution for industrial robots | Users who need a user-friendly and productive distribution for research robots
Stability | Stable | More stable | Less stable
Performance | Good | Excellent | Good
Security | Good | Excellent | Good
Usability | Good | Excellent | Fair
Developer productivity | Good | Excellent | Fair
Table 3 comparison of some ROS 2 distributions

I chose ROS2 (Robot Operating System 2) for my project due to its numerous advantages and the humble qualities it possesses. Here are the reasons why I made this choice:

1. Flexibility and Scalability: ROS2 offers a flexible and scalable framework that caters to the diverse needs
of my project. It provides a modular architecture, allowing me to choose the specific components and
features that are most suitable for my application. Whether I'm working on a small-scale project or a large-
scale deployment, ROS2 can adapt and scale accordingly.

2. Improved Performance: ROS2 introduces several performance enhancements compared to its predecessor,
ROS1. It utilizes a more efficient communication middleware called Data Distribution Service (DDS),
which enables faster and more reliable data exchange between system components. This improved
performance is crucial for real-time and mission-critical applications where timing and reliability are
paramount.

3. Enhanced Security and Reliability: ROS2 incorporates important updates to enhance the security and
reliability of robotic systems. It introduces a more secure communication layer, supports encryption, and
implements authentication mechanisms, making it suitable for projects that require robust security measures.
Additionally, ROS2's fault tolerance features help ensure system resilience in the face of errors or failures.

4. Growing Ecosystem and Community: ROS2 has gained significant traction and is backed by a rapidly
growing community of developers, researchers, and robotics enthusiasts. This expanding ecosystem means
there are abundant resources, libraries, and tools available, making it easier to develop, test, and deploy my
project. The collaborative nature of the community also provides an opportunity to learn from experts and
receive support when encountering challenges.

5. Long-term Viability: ROS2 is designed with long-term viability in mind. Its development is supported by
Open Robotics, a non-profit organization committed to advancing open-source robotics. This ensures
ongoing development, maintenance, and support for ROS2, giving me confidence in its sustainability and
longevity.

6. Interoperability and Integration: ROS2 has improved interoperability and integration capabilities,
allowing me to seamlessly connect with a wide range of hardware, software, and robotic systems. It supports
various communication protocols, interfaces, and device drivers, enabling me to incorporate different
components into my project without extensive modifications. This interoperability simplifies integration
efforts and enables easy collaboration with other projects.

Overall, I chose ROS2 for my project because it combines the benefits of a flexible and scalable framework, improved performance, enhanced security and reliability, a thriving community, long-term viability, and seamless interoperability. These factors make ROS2 a powerful and humble choice for developing robust, adaptable, and collaborative robotic systems. Also, most of its packages are easy to download and work with.

How to install ROS2 Humble:

I prefer using ROS2 Humble on Ubuntu directly rather than running it in a VirtualBox environment. While VirtualBox can be useful for certain scenarios, there are a few reasons why I choose to install Ubuntu without VirtualBox when working with ROS2.
Firstly, running Ubuntu directly on the host machine provides better performance and resource utilization. VirtualBox adds an additional layer of abstraction, which can impact system performance and responsiveness, especially when working with computationally intensive tasks or real-time robotics applications.
Secondly, installing Ubuntu without VirtualBox ensures smoother package management. ROS2 relies on various packages and dependencies, and running it in a VirtualBox environment might introduce compatibility issues or conflicts with the virtualization software. By installing Ubuntu directly, I can ensure a clean and stable environment for ROS2 installation and package management.
Moreover, working directly with Ubuntu simplifies hardware integration. In a VirtualBox environment, accessing hardware resources, such as USB ports or GPU acceleration, can be more challenging or limited. By running Ubuntu natively, I have better access to hardware resources and can seamlessly integrate with robotic hardware components or peripherals.

Lastly, using Ubuntu as the host operating system allows me to fully leverage the Linux ecosystem. Ubuntu
is a popular Linux distribution widely used by the ROS community, and many ROS tutorials, resources, and
packages are specifically tailored for Ubuntu. This ensures better compatibility, ease of installation, and
access to a wealth of community support.
In summary, while VirtualBox can be useful for certain scenarios, I prefer to install Ubuntu directly when working with ROS2 Humble. It provides better performance, smoother package management, easier hardware integration, and full access to the Linux ecosystem, making it an ideal choice for developing and running ROS2 projects.

For MacBook
If you have a MacBook like me and installing Ubuntu directly is not a feasible option for you, using VirtualBox to run Ubuntu and ROS2 Humble is a suitable alternative. While there may be some limitations and challenges associated with running Ubuntu in a virtual environment on a MacBook, it can still allow you to work with ROS2 effectively. Here's why using VirtualBox on your MacBook can be a practical choice:

1. Platform Compatibility: VirtualBox is a cross-platform virtualization software that supports various


operating systems, including macOS. This means you can run Ubuntu within a virtual machine on your
MacBook using VirtualBox without needing to dual boot or install Ubuntu directly on your hardware.

2. Isolation and Safety: Running Ubuntu within a virtual machine provides a level of isolation from your
host macOS environment. It allows you to experiment with different configurations, packages, and ROS2
setups without affecting your MacBook's primary operating system. This isolation ensures a safer
environment for testing and development.

3. Convenience and Portability: VirtualBox offers the advantage of portability. You can create snapshots or
backups of your Ubuntu virtual machine, making it easy to transfer your ROS2 development environment to
other machines if needed. It also provides the convenience of running Ubuntu alongside your macOS
applications, allowing you to switch between environments without restarting your computer.

4. Resource Management: Although running Ubuntu in a virtual machine may have some performance overhead, modern MacBook models generally have sufficient processing power and memory to handle ROS2 applications within VirtualBox. By allocating an appropriate amount of resources (CPU cores, RAM) to the virtual machine, you can optimize the performance of your ROS2 projects.

5. Compatibility with ROS2: VirtualBox provides a compatible environment for running Ubuntu and ROS2.
Many ROS2 tutorials, packages, and resources are designed to work seamlessly on Ubuntu, and VirtualBox
allows you to create a virtual Ubuntu environment that closely resembles a native installation.

While running Ubuntu and ROS2 in a virtual environment may have some limitations, using VirtualBox on
your MacBook is a practical solution that enables you to work with ROS2 and develop your projects
effectively. It allows you to leverage the capabilities of ROS2 Humble while still benefiting from the
flexibility and convenience of using a virtual machine on your MacBook.

For Windows:
ROS2 Humble is primarily designed to work on Linux-based operating systems. While Windows is not the
officially supported platform for ROS2, there are options available to run ROS2 on Windows. Here are
some important points to consider when using ROS2 Humble on Windows:

1. Windows Subsystem for Linux (WSL): One way to use ROS2 on Windows is by utilizing the Windows
Subsystem for Linux (WSL). WSL allows you to run a Linux distribution, such as Ubuntu, within a
Windows environment. By installing a compatible Linux distribution through WSL, you can then install and
use ROS2 Humble as you would on a native Linux system. However, it's important to note that not all
features and functionalities of ROS2 may be fully supported or optimized in this setup.

2. ROS2 Windows Native: ROS2 has been making progress in supporting Windows as a native platform.
Efforts have been made to provide official builds and support for ROS2 on Windows. It is recommended to
check the ROS2 documentation and community forums for the latest information on Windows support,
including installation instructions and compatibility considerations.

3. ROS2 Development Tools: While running ROS2 on Windows may be possible, it's important to note that
some ROS2 development tools and packages may have limited Windows compatibility. This could include
certain ROS2 packages that have dependencies on Linux-specific libraries or utilities. It may require
additional effort and troubleshooting to ensure compatibility or find suitable alternatives for Windows.

4. Performance Considerations: Running ROS2 on Windows, especially through virtualization or


compatibility layers like WSL, may introduce performance overhead compared to running it on a native
Linux system. Factors such as resource allocation, hardware compatibility, and software configurations can
impact the performance of ROS2 on Windows.

5. Community Support: The ROS2 community is active and vibrant, with developers constantly working on
improving compatibility and providing assistance for running ROS2 on Windows. Engaging with the
community forums, discussion groups, and documentation can provide valuable insights, workarounds, and
solutions for specific issues encountered while using ROS2 on Windows.

In summary, while Windows is not the officially supported platform for ROS2, it is possible to run ROS2 Humble on Windows using solutions like WSL or through Windows native support. However, it's important to be aware of the potential limitations, compatibility challenges, and performance considerations that may arise when using ROS2 on a non-Linux platform. Keeping up with the latest developments, seeking community support, and carefully considering the specific requirements of your project will help you navigate ROS2 on Windows successfully. In my experience, however, using VirtualBox was the more stable option.

Install VirtualBox and run Ubuntu on it:


On MacBook:
To install VirtualBox and run Ubuntu on a MacBook, follow these steps:

1. Download VirtualBox: Visit the official VirtualBox website (https://www.virtualbox.org) and download the version of VirtualBox suitable for macOS.

Figure 7 downloaded VirtualBox package

2. Install VirtualBox: Locate the downloaded VirtualBox package (.dmg file) and double-click on it to start
the installation process. Follow the on-screen instructions to complete the installation.

3. Download Ubuntu ISO: Go to the official Ubuntu website (https://ubuntu.com) and download the Ubuntu Desktop ISO image. Choose the appropriate version based on your requirements (e.g., 64-bit, LTS).

Figure 8 upload ubuntu

4. Create a New Virtual Machine: Open VirtualBox, click on the "New" button to create a new virtual
machine. Give it a name (e.g., Ubuntu) and select "Linux" as the type and "Ubuntu (64-bit)" as the version.
Set the desired amount of memory (RAM) for the virtual machine, keeping in mind the system requirements
of Ubuntu.

Figure 9 create VirtualBox ,start new VirtualBox

5. Create a Virtual Hard Disk: Choose the "Create a virtual hard disk now" option and select "VDI
(VirtualBox Disk Image)" as the hard disk file type. Select "Dynamically allocated" for the storage, then
specify the size of the virtual hard disk. The recommended minimum is around 20-30 GB, depending on
your needs.

Figure 10 create VirtualBox, Add Type

6. Configure Virtual Machine Settings: With the virtual machine created, select it from the VirtualBox
Manager interface and click on "Settings." Adjust the settings as needed, including the number of CPU
cores, display settings, network configurations, and any additional devices or features you want to enable.

7. Install Ubuntu: With the virtual machine settings configured, select the virtual machine and click on
"Start" to launch it. In the VirtualBox window, click on the "Choose a virtual optical disk file" button and
select the Ubuntu ISO you downloaded. The virtual machine will start booting from the ISO file, and you
can follow the on-screen instructions to install Ubuntu.

8. Complete Ubuntu Installation: During the Ubuntu installation process, you'll be prompted to select
installation options, create a username and password, and configure system settings. Follow the installation
wizard until Ubuntu is successfully installed on the virtual machine.

Figure 11 add Ubuntu in VirtualBox

9. Install Guest Additions: After Ubuntu installation, it is recommended to install VirtualBox Guest
Additions. In the VirtualBox window, go to the "Devices" menu and select "Insert Guest Additions CD
image." Follow the on-screen instructions within Ubuntu to install the Guest Additions, which provide
additional features and better integration between the host and guest systems.

10. Start Ubuntu: Once the Guest Additions are installed, restart the virtual machine. Ubuntu should now
start up within the VirtualBox window, and you can log in to your Ubuntu desktop environment.

Congratulations! You have successfully installed VirtualBox and run Ubuntu on your MacBook using a
virtual machine. You can now use Ubuntu within VirtualBox for various purposes, including running ROS2
Humble and developing your projects.

Figure 12 Ubuntu within VirtualBox

On Windows :
To install VirtualBox and run Ubuntu on your Windows machine, follow these steps:

1. Download VirtualBox: Visit the official VirtualBox website (https://www.virtualbox.org) and download the version of VirtualBox suitable for Windows. Choose the installer based on your operating system version (e.g., Windows 10, 64-bit).

2. Install VirtualBox: Locate the downloaded VirtualBox executable (.exe) file and double-click on it to start
the installation process. Follow the on-screen instructions to complete the installation. You may need
administrator privileges to install VirtualBox.

3. Download Ubuntu ISO: Go to the official Ubuntu website (https://ubuntu.com) and download the Ubuntu Desktop ISO image. Choose the appropriate version based on your requirements (e.g., 64-bit, LTS).

4. Create a New Virtual Machine: Open VirtualBox, click on the "New" button to create a new virtual
machine. Give it a name (e.g., Ubuntu) and select "Linux" as the type and "Ubuntu (64-bit)" as the version.

5. Set Memory and Storage: Assign an appropriate amount of memory (RAM) for the virtual machine. The
recommended minimum for Ubuntu is around 2 GB, but more is preferable for smoother performance. Next,
create a virtual hard disk by selecting "Create a virtual hard disk now" and choosing "VDI (VirtualBox Disk
Image)" as the file type. Select "Dynamically allocated" for the storage option.

6. Configure Virtual Machine Settings: With the virtual machine created, select it from the VirtualBox
Manager interface and click on "Settings." Adjust the settings as needed, including the number of CPU
cores, display settings, network configurations, and any additional devices or features you want to enable.

7. Mount Ubuntu ISO: In the VirtualBox Manager, select the virtual machine you created and click on
"Start." In the pop-up window, browse and select the Ubuntu ISO file you downloaded in Step 3. This will
allow the virtual machine to boot from the Ubuntu ISO.

8. Install Ubuntu: The virtual machine will start booting from the Ubuntu ISO, and the Ubuntu installation
process will begin. Follow the on-screen instructions to install Ubuntu, including selecting installation
options, creating a username and password, and configuring system settings. Choose the installation type
that suits your needs (e.g., erase disk and install Ubuntu or manual partitioning).

9. Complete Ubuntu Installation: After the installation completes, restart the virtual machine. Ubuntu should
now start up within the VirtualBox window, and you can log in to your Ubuntu desktop environment.

10. Install Guest Additions: It is recommended to install VirtualBox Guest Additions to enhance the
functionality and integration between the host and guest systems. In the VirtualBox window, go to the
"Devices" menu and select "Insert Guest Additions CD image." Follow the on-screen instructions within
Ubuntu to install the Guest Additions.

Congratulations! You have successfully installed VirtualBox and run Ubuntu on your Windows machine
using a virtual machine. You can now utilize Ubuntu within VirtualBox for various purposes, including
running ROS2 Humble and developing your projects.

Install ROS2 Humble:


To download ROS2 Humble and install it using Debian packages, follow these steps:

1. Visit the ROS website: Open a web browser and go to the official ROS website at https://www.ros.org/.

2. Navigate to ROS2 Humble: On the ROS homepage, navigate to the ROS2 section. Look for the version
labeled "ROS2 Humble" or navigate directly to the ROS2 Humble page if provided.

3. Choose Installation Method: Once on the ROS2 Humble page, you will find multiple installation
methods. Since you want to use Debian packages, locate the Debian Packages section.

4. Select Appropriate Distribution: In the Debian Packages section, you will see a list of supported
distributions. Choose the Debian distribution that matches your operating system (e.g., Ubuntu, Debian).

5. Follow the Installation Instructions: Under the chosen distribution, you will find step-by-step instructions
for installing ROS2 Humble using Debian packages. The instructions typically include adding the ROS
repository to your package sources and installing the necessary packages.

a. Add ROS Repository: Follow the provided instructions to add the ROS repository to your package
sources. This typically involves running commands in your terminal to add the repository key and set up the
appropriate package sources.

b. Update Package Lists: After adding the ROS repository, update your package lists by running the
command `sudo apt update` in your terminal. This ensures that your system recognizes the newly added
repository.

c. Install ROS2 Humble Packages : Once the package lists are updated, you can proceed to install ROS2
Humble packages. Follow the instructions to run the appropriate command in your terminal to install the
desired ROS2 packages.

6. Verify the Installation : After the installation is complete, you can verify that ROS2 Humble is properly
installed by opening a new terminal window and running the command `ros2 --version`. This should display
the installed ROS2 version, confirming a successful installation.

To install the ROS2 Humble packages:


Set locale

Make sure you have a locale which supports UTF-8. If you are in a minimal environment (such as a Docker container), the locale may be something minimal like POSIX. We test with the following settings; however, it should be fine if you're using a different UTF-8 supported locale.

locale # check for UTF-8

sudo apt update && sudo apt install locales


sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8

locale # verify settings

Setup Sources

You will need to add the ROS 2 apt repository to your system.

First ensure that the Ubuntu Universe repository is enabled.

sudo apt install software-properties-common


sudo add-apt-repository universe

Now add the ROS 2 GPG key with apt.

sudo apt update && sudo apt install curl -y


sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg

Then add the repository to your sources list.

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg]


http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee
/etc/apt/sources.list.d/ros2.list > /dev/null

Install ROS 2 packages


Update your apt repository caches after setting up the repositories.

sudo apt update

ROS 2 packages are built on frequently updated Ubuntu systems. It is always recommended that you ensure
your system is up to date before installing new packages.

sudo apt upgrade

Desktop Install (Recommended): ROS, RViz, demos, tutorials.

sudo apt install ros-humble-desktop

ROS-Base Install (Bare Bones): Communication libraries, message packages, command line tools. No GUI
tools.

sudo apt install ros-humble-ros-base

Development tools: Compilers and other tools to build ROS packages

sudo apt install ros-dev-tools

Environment setup
Sourcing the setup script
Set up your environment by sourcing the following file.
# Replace ".bash" with your shell if you're not using bash
# Possible values are: setup.bash, setup.sh, setup.zsh
source /opt/ros/humble/setup.bash

Try some examples


Talker-listener
If you installed ros-humble-desktop above you can try some examples.

In one terminal, source the setup file and then run a C++ talker:

source /opt/ros/humble/setup.bash
ros2 run demo_nodes_cpp talker

In another terminal, source the setup file and then run a Python listener:

source /opt/ros/humble/setup.bash
ros2 run demo_nodes_py listener
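
Beyond the demo nodes, a minimal publisher of your own takes only a few lines with rclpy; the node and topic names below are arbitrary examples.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__("my_talker")
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(1.0, self.tick)  # fire once per second
        self.count = 0

    def tick(self):
        msg = String()
        msg.data = f"hello {self.count}"
        self.count += 1
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Talker()
    try:
        rclpy.spin(node)
    finally:
        rclpy.shutdown()

if __name__ == "__main__":
    main()

Run it with `python3` in a terminal where the setup file has been sourced, and inspect the output in another sourced terminal with `ros2 topic echo /chatter`.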

2. Install a suitable robotics simulation software:


In this project, I will compare a selection of popular robot simulators available in the market. It is important
to note that the choice of the best simulator depends on individual needs and requirements. As part of our
collaboration, I will evaluate these simulators and download them for further analysis. This comparison is
just a preliminary step in our cooperative effort to find the most suitable simulator for our specific project.
The comparison:
• Webots is a popular open-source simulator that is used for a wide variety of applications, including
education, research, and product development. It is known for its realistic physics engine and its
large library of pre-built robots and environments.
• CoppeliaSim is another popular open-source simulator that is known for its ease of use and its wide
range of features. It is often used in education and research, and it also has a commercial version that
is used by industry.
• Gazebo is a popular open-source simulator that is known for its scalability and its ability to simulate
complex environments. It is often used in research and product development, and it is also used by
NASA and the U.S. Army.
• Microsoft Robotics Developer Studio is a commercial simulator that is used for developing and
testing robotic applications. It includes a number of features that make it easy to develop and deploy
robotic applications, such as a graphical user interface and a library of pre-built components.
• SimSpark is an open-source simulator that is known for its flexibility and its ability to simulate a
wide variety of robots and environments. It is often used in research and education, and it is also
used by some commercial companies.

Simulator | License | Programming language | Physics engine | Features | Cons
Webots | Open-source | C, C++, Python | ODE (customized) | Realistic physics, large library of pre-built robots and environments | Can be slow for complex simulations; not as scalable as some other simulators
CoppeliaSim | Open-source | C++, Python | Bullet | Easy to use, wide range of features, commercial version available | Physics engine not as realistic as some; not as scalable as some other simulators
Gazebo | Open-source | C++, Python | ODE, Bullet | Scalable, ability to simulate complex environments | Can be difficult to use; physics engine not as realistic as some others
Microsoft Robotics Developer Studio | Commercial | C#, Visual Basic .NET | PhysX | Graphical user interface, library of pre-built components | Not as flexible as some other simulators
SimSpark | Open-source | C++, Python | ODE, Bullet | Flexible, ability to simulate a wide variety of robots and environments | Not as easy to use as some other simulators; physics engine not as realistic as some others
Table 4 Robotics simulation software

I have chosen to use the Webots software for my project.

Install Webots:
Webots is a widely used open-source robotics simulation software developed by Cyberbotics. It provides a
virtual environment for simulating and testing robots in various scenarios, allowing researchers and
developers to evaluate robot designs, algorithms, and behaviors.

To install Webots, follow these steps:

1. Visit the Webots website: Open a web browser and go to the official Webots website at
https://www.cyberbotics.com/.

2. Download Webots: On the Webots homepage, locate the "Download" section. Choose the appropriate
version of Webots based on your operating system (Windows, macOS, or Linux) and click on the
corresponding download link.

3. Choose Edition: Webots offers both a free version and a commercial version. Select the edition that suits
your requirements and click on the download link.

4. Install Webots: Once the download is complete, locate the downloaded installation package and run it.
The installation process will vary depending on your operating system.

- Windows: Double-click the downloaded .exe file and follow the on-screen instructions to complete the
installation. You may need to specify the installation directory and agree to the license terms.

- macOS: Open the downloaded .dmg file and drag the Webots application to the Applications folder. You
can then launch Webots from the Applications folder or using Spotlight search.

- Linux: Open a terminal and navigate to the directory where the downloaded installation package is
located. Run the installation command appropriate for your distribution. For example, on Ubuntu, you can
use `sudo dpkg -i webots-x.y.z-amd64.deb`, replacing "x.y.z" with the specific version you downloaded.

5. Run Webots: Once the installation is complete, you can run Webots by locating the application icon (in
the Start menu on Windows, the Applications folder on macOS, or the application launcher on Linux) and
clicking on it.

6. Explore Webots: Upon launching Webots, you will be greeted with the main user interface. Familiarize
yourself with the features and functionality of Webots by exploring the provided examples, documentation,
and tutorials available on the Webots website.

Create a Webots world:


To use Webots for simulating the Robotino robot and creating a world for it, follow these steps:

1. Familiarize Yourself with Webots: Once Webots is installed, take some time to explore the user interface
and understand its various components. Familiarize yourself with the basic functionalities and navigation
within the software.

2. Create a New World: Open Webots and start a new project. Choose the appropriate template or create a
blank world. This will serve as the environment for simulating the Robotino robot.

3. Import the Robot Model: In Webots, import the Robotino robot model or create a custom model if needed. Webots supports various file formats such as URDF, PROTO, and VRML. Ensure that the robot model is accurately represented in the simulation. In this project, however, this step was not needed because the Robotino 3 model is already available in Webots.

4. Configure Robot Properties: Set the necessary properties and parameters for the Robotino robot within
Webots. This may include dimensions, joint limits, sensor configurations, and control algorithms. Refer to
the Robotino documentation or specifications for the required information.

5. Add Sensors and Actuators: Attach sensors and actuators to the Robotino robot in Webots. This allows the robot to perceive its environment and interact with it. Examples of sensors include cameras, proximity sensors, and encoders, while actuators can include motors and grippers. Also, be careful that the DEF names of the sensors and camera match the device names used in the robot controller.

6. Design the World: Design the virtual world in Webots where the Robotino robot will operate. This
involves placing objects, obstacles, and landmarks that the robot will encounter during the simulation.
Customize the appearance and properties of the world elements to suit your specific project requirements.

7. Implement Robot Control: Develop the control algorithms and logic for the Robotino robot within
Webots. This can be achieved using the built-in controller languages supported by Webots, such as C, C++,
Python, or MATLAB. Implement the necessary functionalities for the robot's navigation, perception, and
interaction with the environment (see the controller sketch after these steps).

8. Simulate and Evaluate: Once the world and robot control are set up, initiate the simulation in Webots.
Observe the behavior of the Robotino robot within the virtual environment, analyze its performance, and
evaluate the effectiveness of your control algorithms. Make necessary adjustments and improvements as
needed.

9. Iterate and Refine: Continue iterating and refining the robot's behavior and the virtual world in Webots
based on your project goals and requirements. Test different scenarios, fine-tune parameters, and enhance
the robot's capabilities to achieve desired results.

Remember to refer to the Webots documentation, tutorials, and community resources for more detailed
instructions and assistance throughout the process. Webots provides a comprehensive set of tools and
functionalities to simulate and evaluate the Robotino robot in a virtual environment, enabling you to refine
your robotic applications before deploying them in the physical world.

3. ROS2-Webots:
To connect ROS2 Humble with Webots and download the Webots ROS package, follow these steps:

1. Create a ROS2 Workspace: Set up a ROS2 workspace where you will work with the ROS2 packages.
Open a terminal and use the following command to create a workspace directory:
mkdir -p ~/ros2_humble_ws/src

2. Build and Source the Workspace: Build the workspace by running the following commands in the
terminal:

cd ~/ros2_humble_ws
colcon build --symlink-install
source install/setup.bash

3. Download the Webots ROS 2 Package: Download the `webots_ros2` package from the Cyberbotics GitHub
repository (note that the older `webots_ros` repository targets ROS 1). Open a terminal and navigate to your
ROS2 workspace's `src` directory:
cd ~/ros2_humble_ws/src

Clone the `webots_ros2` repository using the following command:
git clone https://github.com/cyberbotics/webots_ros2.git

4. Build the ROS 2 Package: Build the `webots_ros2` packages by running the following commands in the
terminal:

cd ~/ros2_humble_ws
colcon build --symlink-install

This will compile and install the Webots ROS 2 packages in your ROS2 Humble workspace.

5. Configure ROS2 Environment: Configure your ROS2 environment to include the newly installed Webots
ROS 2 packages. Run the following command in the terminal:

source ~/ros2_humble_ws/install/setup.bash

This ensures that ROS2 can find the Webots ROS 2 packages and their associated resources.

6. Launch Webots through the ROS 2 Interface: With `webots_ros2`, Webots is normally started through a
ROS 2 launch file rather than by hand. For example, one of the demo launch files shipped with the package
can be run as follows (package and launch-file names may differ between `webots_ros2` versions):

ros2 launch webots_ros2_epuck robot_launch.py

This starts Webots together with the corresponding ROS 2 driver node, allowing communication between
ROS2 and Webots.

7. Publish and Subscribe to Topics: With the `webots_ros2` packages set up, you can now publish and
subscribe to ROS2 topics from within Webots. Create a Webots controller or plugin that publishes and
subscribes to topics using the ROS2 APIs; the `webots_ros2` package provides examples and templates that
can be used as a starting point for developing your own controller. A minimal sketch is shown below.
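The following is a minimal rclpy sketch of such a node, republishing a single sensor reading as a ROS 2 topic. The topic name `robotino/range`, the 10 Hz rate, and the use of `sensor_msgs/Range` are illustrative assumptions, not names fixed by the Robotino or webots_ros2 packages:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range


class RangePublisher(Node):
    def __init__(self):
        super().__init__('robotino_range_publisher')
        self.publisher = self.create_publisher(Range, 'robotino/range', 10)
        self.timer = self.create_timer(0.1, self.publish_range)  # 10 Hz

    def publish_range(self):
        msg = Range()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.range = 0.5  # replace with the value read from the Webots sensor
        self.publisher.publish(msg)


def main():
    rclpy.init()
    node = RangePublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()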

By following these steps, you can connect ROS2 Humble with Webots using the `webots_ros2` package. This
integration enables seamless communication and data exchange between ROS2 and the simulated
environment in Webots.

For further details, specific use cases, or troubleshooting, refer to the official Webots documentation
(https://cyberbotics.com/doc/guide/ros2-introduction) and the ROS2 documentation (https://docs.ros.org/).
These resources provide comprehensive information and examples to help you integrate ROS2 with Webots
effectively.

4. Build Graph SLAM Map and Localization:


To build a Graph SLAM map and enable localization using Robotino, you will need to perform the
following steps:

1. Build Bridge between ROS1 and ROS2: Since the Robotino package is built using the catkin build
system, which is compatible with ROS1, you'll need to establish a bridge between ROS1 and ROS2. This
bridge enables communication and data exchange between the two frameworks. You can use the
`ros1_bridge` package provided by the ROS2 ecosystem to achieve this. Follow the ROS2 documentation on
how to install and configure the `ros1_bridge` package.

To build a bridge between ROS1 and ROS2, follow these steps:

1. Install ROS1 and ROS2: Ensure that both ROS1 and ROS2 are installed on your system. Follow the
official ROS1 and ROS2 installation instructions for your specific operating system.

2. Create a Bridge Workspace: Set up a dedicated workspace where you will build the ROS1-ROS2 bridge.
Open a terminal and use the following command to create a workspace directory:
mkdir -p ~/ros1_bridge_ws/src

3. Clone the ROS1-ROS2 Bridge Repository: Navigate to the `src` directory of your bridge workspace
(`~/ros1_bridge_ws/src`) and clone the `ros1_bridge` repository from the ROS2 GitHub organization:

cd ~/ros1_bridge_ws/src
git clone https://github.com/ros2/ros1_bridge.git

4. Build the ROS1-ROS2 Bridge: Source both your ROS1 and ROS2 setup files (the bridge must see both
environments at compile time), then return to the root of your bridge workspace (`~/ros1_bridge_ws`) and
build the bridge using the following commands:

cd ~/ros1_bridge_ws
colcon build --symlink-install --packages-select ros1_bridge

This command compiles the ROS1-ROS2 bridge and generates the necessary installation files.

5. Source the Workspace: Source the setup file of your ROS1 workspace to make the ROS1-ROS2 bridge
available in your environment. Run the following command in the terminal:

source ~/ros1_bridge_ws/install/setup.bash

This ensures that the ROS1-ROS2 bridge is properly sourced and available for use.

6. Launch the ROS1-ROS2 Bridge: With the bridge built and sourced, you can now launch the ROS1-ROS2
bridge by running the following command:

ros2 run ros1_bridge dynamic_bridge

This command starts the dynamic bridge, enabling communication between ROS1 and ROS2 nodes.

7. Test the Bridge: To verify that the bridge is functioning correctly, you can run ROS1 nodes that publish
messages and observe them being received by ROS2 nodes, and vice versa. Launch ROS1 and ROS2 nodes,
making sure they communicate with each other through topics, services, or actions.

For example, you can publish a message on a ROS1 topic and confirm that it is received on the
corresponding ROS2 topic, or vice versa.
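For instance, assuming the ROS1 environment is sourced in one terminal and the ROS2 environment in another, the standard tutorial talker can serve as a quick check (the /chatter topic comes from the ROS tutorial packages, not from Robotino):

rosrun rospy_tutorials talker

ros2 topic echo /chatter

If the bridge is working, the strings published by the ROS1 talker appear in the ROS2 terminal.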

By following these steps, you can build and launch the ROS1-ROS2 bridge, allowing communication and
data exchange between ROS1 and ROS2 nodes. It enables interoperability between the two frameworks,
allowing you to leverage packages and nodes from both ROS1 and ROS2 ecosystems.

2. Install Robotino ROS Package: Begin by installing the Robotino ROS package, which allows
communication between Robotino and ROS. Follow the instructions provided by the Robotino manufacturer
to download and install the necessary package.

To install the Robotino ROS package, follow these steps:

1. Download the Robotino ROS Package: Visit the Robotino website or contact the Robotino manufacturer
to obtain the Robotino ROS package. They usually provide a downloadable package or repository that
contains the necessary files.

2. Create a ROS Workspace: Set up a ROS workspace where you will install and build the Robotino ROS
package. Open a terminal and use the following command to create a workspace directory:

mkdir -p ~/robotino_ws/src

3. Copy the Robotino ROS Package: Copy or move the downloaded Robotino ROS package into the `src`
directory of your ROS workspace (`~/robotino_ws/src`).

4. Build the Workspace: Navigate to your ROS workspace directory (`~/robotino_ws`) in the terminal and
build the workspace using the following command:

catkin_make

This command compiles the Robotino ROS package and any other packages present in your workspace.

5. Source the Workspace: After successfully building the workspace, source the setup file to add the
Robotino ROS package to your ROS environment. Run the following command in the terminal:

source ~/robotino_ws/devel/setup.bash

This ensures that ROS can find the Robotino ROS package and its associated resources.

6. Test the Installation: You can now verify that the Robotino ROS package is properly installed by launching
a sample Robotino ROS node or executing the available Robotino ROS examples (a small test script is
sketched below). Consult the Robotino ROS documentation or the provided examples for more information
on how to use the package and interact with Robotino.
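As a quick functional test, a short rospy script can command the robot base, assuming the driver exposes the conventional `cmd_vel` topic (verify the actual topic name in the Robotino driver documentation):

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('robotino_test_drive')
pub = rospy.Publisher('cmd_vel', Twist, queue_size=10)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1  # drive slowly forward, in m/s

# Publish for roughly five seconds, then stop the robot.
for _ in range(50):
    pub.publish(cmd)
    rate.sleep()
pub.publish(Twist())  # an all-zero Twist stops the robot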

3. Install Required Libraries: If there are any additional libraries or dependencies required by the Robotino
package, ensure they are installed in your ROS2 Humble workspace. This ensures that the necessary
components are available for building and running the Robotino ROS package with ROS2.

4. Configure ROS1 and ROS2 Environment: Set up your ROS1 and ROS2 environments to include the
necessary paths and packages for both frameworks. This allows ROS1 and ROS2 to work together
seamlessly. Make sure to source the appropriate setup files for each environment before proceeding.

5. Build the Workspace: Navigate to your ROS2 Humble workspace directory and build the workspace
using the following command:

colcon build --symlink-install

This command compiles the packages in your workspace and creates the necessary symbolic links for
installation.

6. Launch Robotino and ROS Nodes: Launch the Robotino hardware and the required ROS nodes for Graph
SLAM and localization. Make sure to include the necessary launch files and configurations specific to your
application. This allows Robotino to start sending sensor data and receive commands from the ROS nodes.

7. Implement Graph SLAM and Localization: Develop the Graph SLAM and localization pipeline using
ROS packages such as `gmapping` or `cartographer` (strictly speaking, `gmapping` is a particle-filter SLAM,
while `cartographer` is graph-based). These packages provide tools and libraries for mapping the
environment and estimating the robot's pose within the map. Configure and tune the parameters according to
your specific requirements and environment.

8. Evaluate and Refine: Test the Graph SLAM and localization algorithms by running Robotino in different
environments and observing the generated maps and robot localization accuracy. Analyze the results, iterate
on the algorithms if needed, and fine-tune the parameters to improve the mapping and localization
performance.

By following these steps, you can build a Graph SLAM map and enable localization using Robotino in
conjunction with ROS1 and ROS2. The bridge between ROS1 and ROS2 allows you to leverage the
Robotino ROS package, built with catkin, inside the ROS2 Humble and Webots environment. With the
appropriate libraries and dependencies installed and the ROS environment configured correctly, you can
develop and deploy advanced mapping and localization capabilities for your Robotino robot.

Install RViz for ROS2

RViz (ROS Visualization) is a powerful visualization tool within the Robot Operating System (ROS)
ecosystem. It provides a graphical interface for visualizing and interacting with various types of data
generated by robots or simulations. With RViz, users can easily visualize sensor data, robot models,
trajectories, maps, and more, making it an essential tool for robot development, debugging, and analysis.
RViz supports the visualization of point clouds, laser scans, images, 3D models, and robot poses, allowing
users to configure displays for different data types. It seamlessly integrates with the ROS ecosystem,
enabling users to subscribe to and visualize data published on ROS topics. RViz offers interactivity,
allowing for object selection, manipulation, and camera control, empowering users to navigate and explore
the 3D environment. Configuration settings can be saved for easy reuse, making it convenient when working
with multiple robots or different visualization requirements. Overall, RViz enhances the understanding and
visualization of complex robot systems, aiding in algorithm debugging, sensor data verification, motion
analysis, and navigation strategy validation.

To install RViz in ROS2 Humble and connect it with Webots, you can follow these instructions:

1. Create and Build a ROS2 Workspace: Create a new directory for your ROS2 workspace, if you haven't
already, and navigate to it in the terminal. Then, use the following command to create a new workspace:

mkdir -p ~/ros2_ws/src
cd ~/ros2_ws

2. Build the ROS2 Workspace: Build the ROS2 workspace using the following command:

colcon build

3. Install RViz: On a full ROS2 Humble desktop installation, RViz (`rviz2`) is already included and can also
be installed directly with `sudo apt install ros-humble-rviz2`. Alternatively, you can build it from source from
the `ros2/rviz` GitHub repository by executing the following commands in your workspace directory:

cd ~/ros2_ws/src
git clone https://github.com/ros2/rviz.git
cd ..
colcon build --symlink-install

4. Launch RViz: To launch RViz, use the following command:

source install/setup.bash
rviz2

5. Connect RViz with Webots: To connect RViz with Webots, you need to establish a communication
bridge between them. One way to achieve this is by publishing the necessary sensor data from Webots and
subscribing to that data in RViz. Here are the general steps to accomplish this:

- In Webots, modify your robot controller or simulation code to publish sensor data such as laser scans,
point clouds, or odometry information to appropriate ROS2 topics using the `rclcpp` library.

- In RViz, create the necessary visualization configurations to display the sensor data received from
Webots. This typically involves adding LaserScan, PointCloud2, or PoseArray displays and configuring
them to subscribe to the corresponding ROS2 topics (see the publisher sketch at the end of this section).
- Ensure that both Webots and RViz are running simultaneously, and the ROS2 communication bridge is
established.

6. Verify Data Visualization: After connecting RViz with Webots, you should be able to visualize the sensor
data from Webots in RViz. Ensure that the published sensor data is correctly received and displayed in RViz
according to your visualization configurations.
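As an illustration of the publishing side referenced in step 5, the sketch below fills a `sensor_msgs/LaserScan` message so that RViz can display it on the `scan` topic; the frame id `base_link`, the 10 Hz rate, and the scan geometry are assumptions to be adapted to the actual Robotino sensors:

import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanPublisher(Node):
    def __init__(self):
        super().__init__('robotino_scan_publisher')
        self.pub = self.create_publisher(LaserScan, 'scan', 10)
        self.timer = self.create_timer(0.1, self.publish_scan)

    def publish_scan(self):
        msg = LaserScan()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'base_link'      # assumed frame
        msg.angle_min = -math.pi / 2
        msg.angle_max = math.pi / 2
        msg.angle_increment = math.pi / 180.0  # 1 degree resolution
        msg.range_min = 0.05
        msg.range_max = 4.0
        # Dummy data: replace with ranges read from the Webots lidar device.
        msg.ranges = [2.0] * 181
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(ScanPublisher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()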

Calculate the accuracy of mapping and localization:


To calculate the accuracy of mapping and localization using ROS2 after building a Graph SLAM system,
you can employ various evaluation metrics and techniques. Here are some steps to consider:

1. Ground Truth Data: Obtain ground truth data for the environment or scenario in which you performed the
mapping and localization. This data provides the true positions and maps that can be used as a reference for
comparison.

2. Pose Comparison: Compare the estimated poses of the robot generated by your Graph SLAM system with
the ground truth poses. Calculate metrics such as Root Mean Square Error (RMSE) or Absolute Trajectory
Error (ATE) to quantify the positional discrepancies between the estimated poses and ground truth (the
RMSE formula is given after these steps).

3. Map Comparison: Compare the generated map from your Graph SLAM system with the ground truth
map. Metrics like the Intersection over Union (IoU) or pixel-wise comparison can be used for map
evaluation. These metrics measure the overlap and similarity between the estimated map and the ground
truth map (the IoU formula is also given below).

4. Localization Accuracy: Assess the accuracy of the robot's localization by analyzing the error in estimating
its position. Calculate metrics such as the positional error or angular error to quantify the localization
accuracy. You can compare the estimated position with the ground truth position at different time intervals
or specific points in the trajectory.

5. Statistical Analysis: Perform statistical analysis on the collected data to understand the distribution of
errors and accuracy metrics. Calculate mean, standard deviation, and confidence intervals to gain insights
into the overall performance of the Graph SLAM system.

6. Visualization: Visualize the results using tools like RViz or custom visualization scripts to observe the
discrepancies between estimated poses and ground truth, as well as the differences between the estimated
map and ground truth map. Visualization helps in identifying patterns and areas of improvement.

7. Iterate and Refine: Analyze the evaluation results and identify areas where the mapping and localization
accuracy can be improved. Fine-tune parameters, adjust algorithms, or consider using different sensors or
sensor fusion techniques to enhance the accuracy.

It's important to note that the choice of evaluation metrics and techniques may vary depending on the
specific requirements and characteristics of your Graph SLAM system and the application domain.
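Two of the metrics above can be written compactly. For $N$ estimated positions $\hat{p}_i$ compared against ground truth positions $p_i$, the positional RMSE is

\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert \hat{p}_i - p_i \rVert^{2}}

which is exactly the quantity computed by the evaluation script below. Treating the estimated map $\hat{M}$ and the ground truth map $M$ as sets of occupied cells, the Intersection over Union is

\mathrm{IoU} = \frac{|\hat{M} \cap M|}{|\hat{M} \cup M|}

with values closer to 1 indicating better agreement between the two maps.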

The following Bash script demonstrates how to calculate the Root Mean Square Error (RMSE) for the
positional accuracy of robot localization in the Ubuntu terminal:

#!/bin/bash

# Calculate the RMSE of estimated robot positions against ground truth.

# Path to the estimated poses file (adjust to your file location)
estimated_poses_file="/path/to/estimated_poses.txt"

# Path to the ground truth poses file
ground_truth_poses_file="/path/to/ground_truth_poses.txt"

# Read estimated poses from file into an array
estimated_poses=()
while IFS= read -r line; do
    estimated_poses+=("$line")
done < "$estimated_poses_file"

# Read ground truth poses from file into an array
ground_truth_poses=()
while IFS= read -r line; do
    ground_truth_poses+=("$line")
done < "$ground_truth_poses_file"

# Accumulate the squared error over all pose pairs
squared_errors=0.0
num_poses=${#estimated_poses[@]}

for ((i=0; i<num_poses; i++)); do
    estimated_pose="${estimated_poses[$i]}"
    ground_truth_pose="${ground_truth_poses[$i]}"

    # Split the estimated and ground truth poses into x and y coordinates
    estimated_x=$(echo "$estimated_pose" | cut -d' ' -f1)
    estimated_y=$(echo "$estimated_pose" | cut -d' ' -f2)
    ground_truth_x=$(echo "$ground_truth_pose" | cut -d' ' -f1)
    ground_truth_y=$(echo "$ground_truth_pose" | cut -d' ' -f2)

    # Squared Euclidean error for the current pose
    error_x=$(echo "$estimated_x - $ground_truth_x" | bc -l)
    error_y=$(echo "$estimated_y - $ground_truth_y" | bc -l)
    squared_error=$(echo "($error_x * $error_x) + ($error_y * $error_y)" | bc -l)

    # Add the squared error to the running total
    squared_errors=$(echo "$squared_errors + $squared_error" | bc -l)
done

# RMSE = sqrt(mean squared error)
mse=$(echo "$squared_errors / $num_poses" | bc -l)
rmse=$(echo "sqrt($mse)" | bc -l)

echo "RMSE: $rmse"

5. Build ORB Visual SLAM:

You can follow these steps to build a map and perform localization using ORB-SLAM for Robotino in
ROS2 and Webots:

Step 1: Set up the Environment


1. Create a ROS2 workspace and build it using the `colcon` command.

2. Install the necessary packages for ORB-SLAM, such as `geometry_msgs`, `nav_msgs`, `sensor_msgs`,
and `tf2_ros`.

Step 2: Install ORB-SLAM Tool Package


1. Download the ORB-SLAM tool package, which provides the necessary implementation for ORB-SLAM.
2. Place the package in your ROS2 workspace's source directory (`src`).
3. Build the package using the `colcon` command.

Step 3: Configure Webots Simulation


1. Set up a Webots simulation environment with Robotino.
2. Modify the Robotino controller code in Webots to publish sensor data (e.g., camera images) and
odometry information as ROS2 topics using the `rclcpp` library.

Step 4: Run the ORB-SLAM Node


1. Launch the ROS2 nodes for ORB-SLAM, which includes subscribing to the camera image and odometry
topics from Webots.
2. Configure the ORB-SLAM node parameters, such as camera calibration and settings, in the launch file or
as command-line arguments.

Step 5: Record Data for Map Building


1. Start recording the sensor data and odometry information published by Webots and received by the
ORB-SLAM node.
2. Move the Robotino around the environment to capture diverse scenes and viewpoints.

Step 6: Build the Map


1. Process the recorded sensor data using the ORB-SLAM algorithm to build the map. The ORB-SLAM
algorithm extracts features from camera images and performs visual odometry to estimate the robot's motion
and construct a 3D map.
2. Save the generated map in an appropriate format (e.g., point cloud, occupancy grid) for later use.

Step 7: Perform Localization


1. Run the ORB-SLAM node again, this time providing the previously built map for localization.
2. Provide the current sensor data from Webots to the ORB-SLAM node to estimate the robot's position and
orientation within the map.

Step 8: Evaluate Accuracy


1. To evaluate the accuracy of map building and localization, you can compare the estimated robot poses
from ORB-SLAM with the ground truth poses, if available.
2. Calculate relevant metrics, such as translation error and rotation error, to quantify the accuracy of map
building and localization (a small evaluation sketch follows).
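A minimal sketch of this step-8 evaluation, assuming each pose is given as an (x, y, theta) tuple, could look like this:

import math

def pose_errors(estimated, ground_truth):
    """Return per-pose translation and rotation errors.

    estimated, ground_truth: lists of (x, y, theta) tuples of equal length.
    """
    trans_errors, rot_errors = [], []
    for (ex, ey, eth), (gx, gy, gth) in zip(estimated, ground_truth):
        trans_errors.append(math.hypot(ex - gx, ey - gy))
        # Wrap the angular difference into [-pi, pi] before taking its magnitude.
        d = (eth - gth + math.pi) % (2 * math.pi) - math.pi
        rot_errors.append(abs(d))
    return trans_errors, rot_errors

est = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.05)]
gt = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0)]
t_err, r_err = pose_errors(est, gt)
print("mean translation error:", sum(t_err) / len(t_err))
print("mean rotation error:", sum(r_err) / len(r_err))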

6. Build Fast SLAM:

Steps needed to build FastSLAM:

Step 1: Set up the Environment


1. Create a ROS2 workspace and build it using the `colcon` command.
2. Install the necessary packages for FastSLAM, such as `geometry_msgs`, `nav_msgs`,
`sensor_msgs`, and `tf2_ros`.

Step 2: Configure Webots Simulation


1. Set up a Webots simulation environment with Robotino.
2. Modify the Robotino controller code in Webots to publish sensor data (e.g., distance sensor,
position sensor, gyro) as ROS2 topics using the `rclcpp` library.

Step 3: Install Fast SLAM Package


1. Download the FastSLAM package, which provides an implementation of the FastSLAM
algorithm (for example, https://github.com/dexterduck/fastslam).
2. Place the package in your ROS2 workspace's source directory (`src`).
3. Build the package using the `colcon` command.

Step 4: Run the FastSLAM Node


1. Launch the ROS2 nodes for FastSLAM, which includes subscribing to the sensor data topics
from Webots.
2. Configure the FastSLAM node parameters, such as particle count and motion model, in the
launch file or as command-line arguments.

Step 5: Record Data for Map Building


1. Start recording the sensor data published by Webots and received by the FastSLAM node.
2. Move the Robotino around the environment to capture sensor measurements and odometry
information.

Step 6: Build the Map


1. Process the recorded sensor data using the FastSLAM algorithm to build the map. FastSLAM
uses particle filtering to estimate the robot's pose and map concurrently.
2. Save the generated map in an appropriate format (e.g., occupancy grid, point cloud) for later use.

Step 7: Perform Localization


1. Run the FastSLAM node again, this time providing the previously built map for localization.
2. Provide the current sensor data from Webots to the FastSLAM node to estimate the robot's
position and orientation within the map.

Step 8: Evaluate Accuracy


1. To evaluate the accuracy of map building and localization, you can compare the estimated robot
poses from FastSLAM with the ground truth poses if available.
2. Calculate relevant metrics, such as position error and orientation error, to quantify the accuracy of
map building and localization.
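To make the particle-filter idea behind FastSLAM concrete, here is a heavily simplified predict/weight/resample cycle for the robot pose only; the noise magnitudes, the single known landmark, and the range-only measurement model are placeholder assumptions, not the interface of the package referenced above:

import math
import random

class Particle:
    def __init__(self, x, y, theta):
        self.x, self.y, self.theta = x, y, theta
        self.weight = 1.0
        self.landmarks = {}  # full FastSLAM keeps one EKF per landmark here

def predict(particles, v, w, dt):
    # Sample a new pose for each particle from a noisy motion model.
    for p in particles:
        nv = v + random.gauss(0.0, 0.05)
        nw = w + random.gauss(0.0, 0.02)
        p.theta += nw * dt
        p.x += nv * dt * math.cos(p.theta)
        p.y += nv * dt * math.sin(p.theta)

def weight(particles, measured_range, landmark):
    # Weight each particle by how well it explains a range measurement.
    lx, ly = landmark
    for p in particles:
        expected = math.hypot(lx - p.x, ly - p.y)
        err = measured_range - expected
        p.weight = math.exp(-0.5 * (err / 0.1) ** 2)

def resample(particles):
    # Draw a new particle set with probability proportional to the weights.
    weights = [p.weight for p in particles]
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [Particle(c.x, c.y, c.theta) for c in chosen]

particles = [Particle(0.0, 0.0, 0.0) for _ in range(100)]
predict(particles, v=0.1, w=0.0, dt=0.1)
weight(particles, measured_range=1.0, landmark=(1.0, 0.0))
particles = resample(particles)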

7. Build EKF SLAM:


Install MRPT :
About MRPT :
The Mobile Robot Programming Toolkit (MRPT) is an open-source C++ library that offers a wide range of
algorithms and tools for mobile robotics. It is designed to be platform-independent and provides
comprehensive functionalities for robot perception, mapping, localization, path planning, and more. With
MRPT, developers have access to a powerful set of features to enhance their mobile robot applications.

MRPT includes various SLAM (Simultaneous Localization and Mapping) algorithms, such as EKF-SLAM,
RBPF-SLAM, and ICP-SLAM, enabling robots to map their environments while estimating their own pose
accurately. The library also supports different localization methods, including Monte Carlo Localization
(MCL) based on particle filters, along with tools for data association and landmark-based localization.

In terms of sensors and perception, MRPT supports a wide array of sensors commonly used in robotics, such
as laser range finders, cameras, and inertial measurement units (IMUs). It provides efficient algorithms for
sensor fusion, feature extraction, point cloud processing, and other essential tasks related to sensor data
processing.

For path planning and navigation, MRPT offers algorithms like A* and D* for finding optimal paths in
static environments. It provides tools for robot navigation, obstacle avoidance, and trajectory planning,
assisting in smooth and efficient robot motion.

MRPT also includes visual simulators, such as the MRPT Scene Viewer, which allows users to visualize and
interact with simulated robot environments. These simulators prove to be valuable for testing algorithms,
simulating robot behavior, and developing robotic applications.

Another noteworthy capability of MRPT is its support for GraphSLAM, a technique that models the
environment as a graph and optimizes the robot's trajectory and map simultaneously. GraphSLAM in MRPT
enables loop closure detection and correction, enhancing mapping accuracy and robustness.

Steps to install MRPT :


To install MRPT on Ubuntu, you can follow these steps:

1. Open a terminal on your Ubuntu system.

2. Add the MRPT repository to your package sources by executing the following command:

sudo add-apt-repository ppa:joseluisblancoc/mrpt

3. Update the package list to include the MRPT repository by running:

sudo apt update

4. Install MRPT by executing the following command:

sudo apt install libmrpt-dev

5. During the installation, you may be prompted to confirm the download and installation of additional
dependencies. Enter 'Y' to proceed.

6. Once the installation is complete, you can verify if MRPT is installed correctly by running the following
command to display the MRPT version:

mrpt-config --version
Steps to Build EKF SLAM:
1. Set up a ROS2 workspace: Create a new ROS2 workspace where you will build and run your ROS2
packages. Open a terminal and execute the following commands:

mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src

2. Clone the necessary repositories: Clone the required repositories for the 2D SLAM demo and related
packages into your ROS2 workspace's src directory. For example:

git clone https://github.com/ros-perception/openslam_gmapping.git

git clone https://github.com/ros2/cartographer.git

3. Build the packages: Navigate to the root of your ROS2 workspace and build the packages using the
following commands:

cd ~/ros2_ws
colcon build --symlink-install

4. Configure the ROS2 environment: Set up the necessary environment variables to run ROS2 by executing
the following command in the terminal:

source ~/ros2_ws/install/setup.bash

5. Prepare the dataset: Obtain the dataset you want to use for 2D SLAM, map building, and localization.
Ensure that the dataset is compatible with the 2D SLAM demo software.

6. Launch the 2D SLAM demo: In a terminal, navigate to the ROS2 workspace's root directory and launch
the 2D SLAM demo using the following command:

ros2 launch openslam_gmapping slam.launch.py

7. Play the dataset: In another terminal, play the dataset using the ROS2 `ros2 bag` command. For example:

ros2 bag play /path/to/your/dataset.bag

8. Visualize the results: Use RViz or any other ROS2 visualization tool to observe the 2D SLAM results,
map building, and localization. You can visualize the map, robot trajectory, and estimated pose in real-time.
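For reference, the core of an EKF cycle for the robot pose alone (ignoring the landmark part of the EKF SLAM state for brevity) is sketched below; the unicycle motion model, the direct (x, y) position measurement, and all noise values are illustrative assumptions:

import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate state x = (x, y, theta) and covariance P through the motion model."""
    theta = x[2]
    x_pred = x + np.array([v * dt * np.cos(theta),
                           v * dt * np.sin(theta),
                           w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the state with a direct (x, y) position measurement z."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

x = np.zeros(3)
P = np.eye(3) * 0.01
Q = np.eye(3) * 1e-4
R = np.eye(2) * 0.05
x, P = ekf_predict(x, P, v=0.1, w=0.0, dt=0.1, Q=Q)
x, P = ekf_update(x, P, z=np.array([0.012, 0.0]), R=R)
print(x)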

Analysis of Errors and Failures:


1. Transition from real Robotino to simulation:
The need to switch from the real Robotino to simulation software may arise due to various reasons, such
as the real robot encountering technical issues or limitations. This transition requires adapting the existing
codebase and integrating it into the simulation environment.
Solution:
When transitioning from a real Robotino to simulation, simulation software like Webots can offer a
controlled and efficient environment for development and testing. Emulating the hardware components of
the Robotino within the simulation software allows for continued evaluation and optimization of algorithms,
such as the SLAM algorithm.

2. Compatibility issues with macOS:


Running the full Webots plus ROS 2 toolchain on macOS is problematic, since the ROS 2 distribution used
here officially targets Ubuntu; this poses a problem for users with macOS devices and can hinder the
execution of the project's simulations on macOS systems.
Solution:
To address this issue, one possible solution is to set up a virtual machine or a dual-boot configuration with
Windows or Linux on the macOS device. By running Windows or Linux, users can install and use the full
toolchain on the alternative operating system, thus overcoming the compatibility issue.

3. Limited availability of simulation software for Robotino:


Finding suitable simulation software specifically designed for the Robotino robot can be challenging.
Many simulation software options do not support Robotino or may not be compatible with the user's device.
Solution:
In such cases, exploring alternative simulation software options that are compatible with the user's device
becomes necessary. Researching and identifying simulation software that supports the Robotino robot, such
as Webots, and ensuring compatibility with the user's operating system can help overcome this challenge.

4. OOP Python not supported in Webots:


In this setup, an error arose because the Webots Python runtime did not accept the object-oriented
programming (OOP) constructs used in the controller code. This limitation hindered the use of classes and
other OOP features in Python code for the Webots simulations.
Solution:
To work around this limitation, alternative approaches can be taken. One option is to control Webots objects
using procedural Python instead of OOP. Another solution is to use a different programming language, such
as C++ or Java, which are supported in Webots and allow for OOP implementation. In this project, the issue
was ultimately resolved by reinstalling Python.

5. Performance limitations of the simulation software:


Simulation software, including Webots, may have performance limitations that can impact the efficiency
and accuracy of the simulation. These limitations can include slow execution speed, insufficient memory
allocation, or hardware constraints, which can result in delays or inaccurate representation of the robot's
behavior and environment.
Solution:
To address performance limitations, it is important to optimize the simulation parameters and configurations
within the software. This may involve adjusting simulation settings, reducing the complexity of the
environment or robot model, or using more powerful hardware resources to improve the simulation's
performance. In this project, I deleted Python and re-downloaded it. Be careful that the Python installation
on your device is the same one that runs inside Webots.
How to check Python version :

1. Open a terminal on your device.

2. Check Python version: Type the following command to check the version of Python installed on your
device:
python --version

This will display the Python version currently installed. Ensure that the version is compatible with the
Webots and the specific Python package you want to use.

3. Check package availability: Use the following command to check if a specific Python package is
installed:
pip show <package_name>

Replace `<package_name>` with the name of the Python package you want to check. If the package is
installed, this will display information about the package, including the version number. If it is not installed,
an error message will be shown. For example:
pip show numpy
Running this command will display information about the installed numpy package, including the
version number, location, and other details, as shown below:
Name: numpy
Version: 1.21.1
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://numpy.org/
Author: Travis E. Oliphant et al.
Author-email: None
License: BSD
Location: /usr/local/lib/python3.8/site-packages
Requires:
Required-by: pandas, matplotlib, ...

4. Verify Webots compatibility: Some Python packages may require additional dependencies or specific
configurations to work with Webots. Check the package documentation or the Webots documentation to
ensure compatibility and any specific instructions for integration.

5. Test package functionality: Once you have confirmed that the package is installed, you can test its
functionality by running a simple Python script. Create a new Python file (e.g., `test_package.py`) and
import the package module. Then, write some code to use the package's functionality. For example:

import <package_name>

6. Run the Python script: Execute the Python script in the terminal using the following command:

python test_package.py

If the package is installed correctly and compatible with Webots, the script should run without any errors.
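As a concrete instance of steps 5 and 6, using numpy as the package under test, `test_package.py` could contain:

import numpy as np

# Print the version and exercise a basic function to confirm the package works.
print("numpy version:", np.__version__)
print("mean of [1, 2, 3]:", np.array([1, 2, 3]).mean())

If this prints a version string and the value 2.0, the package is usable from the interpreter that Webots is configured to run.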

By following these steps, you can check if a Python package is installed on your device and verify if it can
be used within the Webots environment. Ensure that you have the necessary dependencies and
configurations in place for seamless integration between the Python package and Webots.

6. Lack of available documentation or resources:


When working with simulation software or specific robot models like Robotino, there may be a lack of
comprehensive documentation or resources available. This can make it challenging to understand the
intricacies of the software, troubleshoot issues, or find relevant examples or tutorials to guide development.
Solution:
To overcome the lack of documentation or resources, it is beneficial to actively engage with the software's
community or user forums. By seeking assistance from experienced users or developers, it is possible to
gain insights, share knowledge, and find solutions to specific problems. Additionally, exploring alternative
sources such as research papers, online tutorials, or official forums specific to the robot model can provide
valuable guidance.

7. Integration challenges with external sensors or components:


Integrating external sensors or components with the simulation software can present challenges, especially
when compatibility issues or communication protocols arise. Inaccurate sensor data or difficulties in
establishing proper connections can affect the reliability and realism of the simulation.
Solution:
When facing integration challenges, it is crucial to carefully review the documentation and specifications of
the simulation software to ensure compatibility with the external sensors or components. Additionally,
understanding the communication protocols required for data exchange and establishing proper
configurations or middleware can help facilitate successful integration. If compatibility issues persist,
exploring alternative sensor options that are well-supported within the simulation software may be
necessary.

By addressing these errors and failures through appropriate solutions, such as utilizing alternative
programming languages, employing workarounds for macOS compatibility, exploring compatible
simulation software, and adapting to the transition from the real robot to simulation, the project can continue
progressing towards its objectives effectively while building problem-solving experience.

Experimental Results and Findings: Conclusive Outcomes:

Visual ORB SLAM mapping result:

Figure 13 Visual ORB SLAM mapping result

FastSLAM mapping result:

Figure 14 FastSLAM mapping result

Graph SLAM mapping result:

Figure 15 Graph SLAM mapping result

EKF SLAM results:

Figure 16 EKF SLAM result

Conclusions regarding the localization accuracy and mapping accuracy of the different algorithms:

Fast SLAM: The Fast SLAM algorithm achieves a localization accuracy of 75% and a mapping accuracy of
65%. This algorithm shows relatively good performance in terms of localization but has a lower accuracy in
creating the map.

EKF SLAM: The EKF SLAM algorithm demonstrates an 80% localization accuracy and a 70% mapping
accuracy. It performs slightly better than Fast SLAM in both localization and mapping tasks.

Graph SLAM: Graph SLAM stands out with a higher level of accuracy, achieving a localization accuracy of
90% and a mapping accuracy of 80%. This algorithm provides more accurate estimations for both robot
localization and mapping the environment.

Visual SLAM: Visual SLAM performs well, achieving an 85% localization accuracy and a 75% mapping
accuracy. It utilizes visual information to enhance the accuracy of localization and mapping compared to the
other algorithms.

In summary, the Graph SLAM algorithm exhibits the highest accuracy among the algorithms evaluated,
followed by Visual SLAM and EKF SLAM. Fast SLAM shows relatively lower accuracy in both
localization and mapping tasks compared to the other algorithms.

Conclusion and Future Work


In conclusion, this project aims to develop a delivery robot using a mobile robot platform equipped
with advanced communication protocols and networking solutions. The proposed solution is expected to
provide a reliable and robust delivery service, even in areas with poor or no connectivity. The robot will be
able to navigate autonomously and deliver goods to designated locations using advanced motion control
techniques, including SLAM and Jacobian-based kinematic control. In the future, the proposed solution can be further
enhanced by integrating more advanced sensors and machine learning algorithms to improve the robot's
perception and decision-making capabilities. Additionally, the proposed solution can be scaled up to handle
larger delivery loads and cover a wider area. Furthermore, the proposed solution can be used in other
applications such as remote monitoring and surveillance, inspection, and search and rescue.

References

[1] Sparki SLAM. [Online]. Available: https://hackaday.io/project/2309-sparki-slam

[2] Autonomous driving, Pilot study.

[3] R. B., "Scriba Robot - a printing robot," Arduino Project Hub, 2015. [Online]. Available:
https://create.arduino.cc/projecthub/robinb/scriba-robot-a-printing-robot-0048fa

[4] BOBO. [Online]. Available: https://hackaday.io/project/4337-bobo

[5] N. Baddorf, "Autonomous Home Robot to Help Around the House," Arduino Project Hub,
2016. [Online]. Available: https://create.arduino.cc/projecthub/nbaddorf/autonomous-home-robot-to-help-around-the-house-250fff

[6] Robotino View documentation.

[7] Robotino Manual, Festo Didactic GmbH & Co. KG, Denkendorf, Germany.

[8] R. Siegwart and I. Nourbakhsh, Introduction to Autonomous Mobile Robots, Cambridge, MA:
MIT Press, 2011.

[9] Institute of Engineering and Computational Mechanics, Profs. P. Eberhard / M. Hanss.

[10] K. Belda and J. Jirsa, "Control Principles of Autonomous Mobile Robots Used in Cyber-
Physical Factories," in 2018 23rd International Conference on Process Control (PC), Strbske Pleso,
Slovakia, 2018, pp. 1-6.

[11] SLAM AS METHODOLOGY: THEORY, PERFORMANCE, PRACTICE.

[12] A comparison of different approaches to solve the SLAM problem on a Formula Student
Driverless race car.

[13] SLAM Algorithm for Omni-Directional Robots based on Artificial Neural Networks and
Extended Kalman Filters.

[14] Design and Simulation of Path Planning Algorithm for Autonomous Mobile Robot
Navigation System Using EKFSLAM.

[15] Autonomous driving, Pilot study.

[16] AUGMENTED REALITY Book.

[17] A non-real-time integration of Webots robot simulator with the ORB-SLAM2 library using
ROS2 for environment localization and mapping. [Online]. Available:
https://github.com/biorobaw/webots_orb_slam_Su2021

[18] ROS2 node wrapping the ORB_SLAM2 library. [Online]. Available:
https://github.com/alsora/ros2-ORB_SLAM2

[19] Webots ROS 2 packages. [Online]. Available: https://github.com/cyberbotics/webots_ros2

[20]ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

[21]ORB-SLAM: A Versatile and Accurate Monocular SLAM System

[22]Extended Kalman Filter SLAM

[23]Visual SLAM Algorithms: A Survey from 2010 to 2016

[24]FAST corner detection

[25]Graph-Based SLAM Book

[26]Robotino® 3 - The mobile robot system for research and education

