
Positioning Robot with Extended Kalman Filter (EKF) and Adaptive Monte Carlo Localization (AMCL)

by

Na Kimhoir

A thesis submitted in partial fulfillment of the requirements for a Bachelor's degree

[Institute of Technology of Cambodia]

[01/07/2024]
Abstract

Your abstract goes here.

Acknowledgements

Contents

Abstract
Acknowledgements
Contents
List of Figures
List of Tables
1 INTRODUCTION
1.1 Background of Project
1.2 Objective of Study
1.3 Scope of Work
1.4 Outline of the Thesis
2 LITERATURE REVIEW
2.1 ROS 2 Framework
2.2 Robot Localization
2.3 Extended Kalman Filter (EKF)
2.4 Adaptive Monte Carlo Localization (AMCL)
3 THEORETICAL BACKGROUND
3.1 Robot Kinematic Model
3.2 Extended Kalman Filter
4 SYSTEM OVERVIEW
4.1 Hardware Components
4.2 Software Architecture
5 EKF IMPLEMENTATION
5.1 Simulation With Turtlebot3
List of Figures

1 The Robot
2 ROS 2 introduction
3 Robot Localization
4 Extended Kalman Filter
5 The differential wheeled robot
6 Hardware diagram
7 CANable USB
8 L12-20612 Series Servo Drives
9 HFI IMU A9
10 YDLidar G2
11 Software architecture diagram depicting EKF, AMCL, and robot kinematics
12 Robot Localization's Packages
13 Rviz2 visualization of IMU data
14 Turtlebot3 Burger in Gazebo simulation environment
List of Tables

Nomenclature

AMCL Adaptive Monte Carlo Localization

EKF Extended Kalman Filter

GPS Global Positioning System

IMU Inertial Measurement Unit

LiDAR Light Detection and Ranging

ROS Robot Operating System

ROS 2 Robot Operating System 2

1 INTRODUCTION

1.1 Background of Project

The field of indoor robotics has grown rapidly in sectors such as manufacturing, offices, hospitals,
and warehouses, where robots perform a variety of jobs in environments with changing layouts and
few sensory inputs. Precise localization is essential for self-navigation and task execution;
nevertheless, conventional techniques struggle in dynamic indoor situations devoid of GPS signals,
requiring new sensor technologies. The platform used in this work is a food delivery robot that is
currently under development.

Figure 1: The Robot

This project develops a positioning system specifically for dynamic indoor environments, in an
effort to meet the challenge of indoor robotic localization. Accurate localization and navigation
are made possible by the system's integration of widely used sensors, probabilistic localization
methods, and state estimation. To estimate the robot's pose relative to a global map, methods
such as EKF and AMCL fuse sensor data to improve the accuracy of the robot's position estimate.

1.2 Objective of Study

The objective of this study is to investigate and evaluate the effectiveness of the developed indoor
robotic positioning system in enabling accurate localization and autonomous navigation in dynamic
indoor environments. Specifically, the study aims to:

1. Assess the performance of the integrated sensor technologies, including LiDAR, wheel
encoders, and IMU, in providing accurate and reliable data for localization and navigation
tasks.

2. Evaluate the effectiveness of the implemented probabilistic localization algorithms, such as
EKF and AMCL, in estimating the robot's pose relative to a global map and adapting to
changing environmental conditions.

3. Investigate the impact of integrating wheel odometry data on the overall accuracy and relia-
bility of the robotic positioning system, particularly in scenarios with dynamic motion and
environmental changes.

4. Analyze the system’s ability to utilize a global map for efficient navigation and obstacle
avoidance, considering various indoor environments and scenarios.

5. Validate the performance of the developed system through real-world experiments and sim-
ulations, comparing its performance against existing localization methods and assessing its
suitability for practical indoor robotic applications.

By achieving these objectives, this study aims to contribute to the advancement of indoor
robotics technology, enabling robots to operate autonomously and effectively in dynamic indoor
environments, thereby enhancing efficiency and productivity across various indoor robotic appli-
cations.

1.3 Scope of Work

The scope of this project encompasses the design, development, implementation, and evaluation of
the indoor robotic positioning system within a simulated and real-world indoor environment. The
key components of the project include:

1. System Design: Designing the architecture of the indoor robotic positioning system, includ-
ing the integration of sensor technologies, localization algorithms, and navigation strategies.

2. Software Development: Developing the software components required for sensor data pro-
cessing, localization algorithms implementation, and autonomous navigation control within
the ROS 2 framework.

3. Sensor Integration: Integrating various sensor technologies such as LiDAR, wheel encoders,
and IMUs into the robotic platform and configuring them for accurate data acquisition.

4. Algorithm Implementation: Implementing probabilistic localization algorithms such as
EKF and AMCL to estimate the robot's pose relative to a global map.

5. Testing and Evaluation: Conducting extensive testing and evaluation of the developed
system in simulated indoor environments using robotics simulation tools like Gazebo, as
well as in real-world indoor environments to validate its performance and reliability.

6. Performance Analysis: Analyzing the system's performance metrics including localization
accuracy, navigation efficiency, obstacle avoidance capability, and computational efficiency.

7. Documentation and Reporting: Documenting the design, implementation, and evaluation
processes, and preparing comprehensive reports detailing the findings, conclusions, and
recommendations.

The scope of work outlined above aims to ensure the successful development and evaluation
of the indoor robotic positioning system, providing insights into its capabilities, limitations, and
potential applications in various indoor robotic scenarios.

1.4 Outline of the Thesis

This thesis is structured to provide a comprehensive overview of the design, development, imple-
mentation, and evaluation of the indoor robotic positioning system. The following chapters outline
the key components and stages of the project:

1. Introduction: This chapter introduces the background, objectives, scope, and outline of the
thesis.

2. Literature Review: This chapter reviews relevant literature on indoor robotics, localization
techniques, sensor technologies, and existing approaches to indoor robotic positioning.

3. System Overview: This chapter provides an overview of the indoor robotic positioning
system, including hardware components, software architecture, and integration within the
ROS 2 framework.

4. Methodology: This chapter outlines the methodology used in designing, developing, and
evaluating the indoor robotic positioning system, including system design, software develop-
ment, sensor integration, and testing procedures.

5. Results and Analysis: This chapter presents the results of testing and evaluation conducted
in simulated and real-world indoor environments, along with an analysis of the system’s
performance metrics.

6. Discussion: This chapter discusses the implications of the findings, identifies limitations of
the system, and proposes recommendations for future research and development.

7. Conclusion: This chapter summarizes the key findings of the thesis, discusses their signifi-
cance, and outlines potential avenues for further research in the field of indoor robotics.

Each chapter is designed to provide detailed insights into the different aspects of the indoor
robotic positioning system, contributing to a comprehensive understanding of its design, imple-
mentation, and performance.

2 LITERATURE REVIEW

2.1 ROS 2 Framework

ROS 2 is an open-source framework designed for building robot applications. It is essentially a
collection of software tools and libraries that simplify the process of creating robot software.
Figure 2 shows the basic building blocks of ROS 2.

Figure 2: ROS 2 introduction

Package: Within the context of robotics development, a package functions as an organizational
unit for managing codebase components. It offers a structured approach to code organization,
simplifying installation, sharing, and collaboration among developers.
Nodes: Nodes represent individual executable processes crucial for communication within the
ROS architecture. These nodes play a pivotal role in robotic systems, executing specific tasks and
exchanging information with other nodes.
Action: Actions denote a communication pattern essential for managing long-running, goal-
oriented tasks among nodes in a robotic system. Unlike simple message passing, actions support
asynchronous execution and feedback exchange, enabling the handling of complex robotic behav-
iors.
Topic: Topics serve as communication channels facilitating message exchange between nodes
in a robotic system. Nodes can publish messages to topics or subscribe to receive messages,
enabling seamless data exchange and coordination.
Message: Messages represent structured data types utilized for inter-node communication
in robotic systems. Messages are integral to communication and decision-making processes in
ROS-based robotic applications.
Service: Services embody a communication pattern where one node requests specific actions
or information from another node, eliciting a synchronous response. Within robotic systems,
services facilitate client-server interactions, allowing nodes to offer functionalities such as sensor
data querying, computation task requests, or execution of actions based on received requests.
Parameter: Parameters denote dynamic values adjustable at runtime, offering a flexible mech-
anism for configuring node behavior and tuning system parameters without necessitating code
changes or recompilation.
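
To make these concepts concrete, the following minimal sketch shows a ROS 2 node written in
Python with rclpy that combines a publisher, a subscriber, and a runtime parameter. The node,
topic, and parameter names here are illustrative only and are not part of this project's codebase.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class MinimalNode(Node):
    def __init__(self):
        super().__init__('minimal_node')
        # Publisher: other nodes can subscribe to /status to receive these messages.
        self.pub = self.create_publisher(String, 'status', 10)
        # Subscriber: the callback runs whenever a message arrives on /command.
        self.sub = self.create_subscription(String, 'command', self.on_command, 10)
        # Parameter: adjustable at runtime without recompiling the node.
        self.declare_parameter('update_rate_hz', 10.0)
        rate = self.get_parameter('update_rate_hz').value
        self.timer = self.create_timer(1.0 / rate, self.on_timer)

    def on_timer(self):
        msg = String()
        msg.data = 'alive'
        self.pub.publish(msg)

    def on_command(self, msg):
        self.get_logger().info(f'received command: {msg.data}')


def main():
    rclpy.init()
    node = MinimalNode()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()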

2.2 Robot Localization

Robot localization is an essential component of autonomous robotic systems: it gives them
the ability to know their position and orientation in a given environment. To estimate the
robot's pose relative to a specified reference frame, computational techniques and sensor data
are usually integrated. Several strategies have been created to tackle this problem, ranging from
straightforward odometry-based procedures to more complex probabilistic methods.

Figure 3: Robot Localization

Traditional methods of robot localization often rely on odometry, which estimates the robot’s
position based on the movement of its wheels or other motion sensors. While odometry provides
real-time estimates, it is prone to cumulative errors and drift over time, particularly in environments
with uneven terrain or wheel slippage.
Probabilistic localization techniques, such as Kalman Filters and Particle Filters, have gained
popularity due to their ability to incorporate uncertain sensor measurements and motion models
into a probabilistic framework. Kalman Filters, including the Extended Kalman Filter (EKF), are
widely used for state estimation in robotics, especially when dealing with linear or moderately
nonlinear systems. However, their effectiveness may diminish in highly nonlinear environments or
with complex sensor data.
Adaptive Monte Carlo Localization (AMCL) is an advanced probabilistic localization technique
based on Particle Filters, specifically designed to handle non-linear and multimodal state estimation
problems. AMCL maintains a set of weighted particles representing the robot’s possible poses and
updates them using sensor measurements and motion models. Through resampling and importance
weighting, AMCL effectively converges to the robot’s true pose even in challenging environments
with sensor noise and uncertainty.
Recent advancements in robot localization research have focused on sensor fusion techniques,
integrating data from multiple sensors such as Inertial Measurement Units (IMUs), wheel encoders,
and LiDAR sensors. By combining complementary information from different sensors, these
approaches aim to improve localization accuracy, robustness, and resilience to environmental
disturbances.
Overall, the field of robot localization continues to evolve, with ongoing efforts to develop
more accurate, efficient, and robust techniques for autonomous navigation and mapping in complex
real-world environments.

2.3 Extended Kalman Filter (EKF)

The Extended Kalman Filter (EKF) is a widely used recursive Bayesian estimator for state estimation
in nonlinear dynamic systems. It is an extension of the traditional Kalman Filter (KF), which is
applicable only to linear systems. EKF approximates the state estimation problem in nonlinear
systems by linearizing the system dynamics and sensor models around the current estimate, thus
enabling the application of the standard Kalman Filter algorithm.

2.3.1 Algorithm Overview

The EKF algorithm consists of two main steps: the prediction step (also known as the time update)
and the correction step (also known as the measurement update). In the prediction step, the current
state estimate is propagated forward in time using the system dynamics model. Meanwhile, in the
correction step, sensor measurements such as those from IMU and wheel encoders (odometry) are
incorporated to update the state estimate, resulting in improved accuracy and reduced uncertainty.

2.3.2 Mathematical Formulation

Let x denote the state vector of the system, u the control input, z the sensor measurements,
f the (nonlinear) state-transition function with Jacobian F, Q the process noise covariance matrix,
H the measurement matrix, and R the measurement noise covariance matrix. The
EKF algorithm can be summarized as follows:

Figure 4: Extended Kalman Filter

Prediction Step:

\[
\hat{x}^- = f(\hat{x}, u)
\]
\[
P^- = F P F^T + Q
\]

Correction Step:

\[
K = P^- H^T \left( H P^- H^T + R \right)^{-1}
\]
\[
\hat{x} = \hat{x}^- + K \left( z - H \hat{x}^- \right)
\]
\[
P = (I - K H) P^-
\]

where \(\hat{x}^-\) and \(P^-\) represent the predicted state estimate and covariance matrix, respectively,
and \(\hat{x}\) and \(P\) represent the corrected state estimate and covariance matrix, respectively.

2.3.3 Applications in Robotics

EKF has found extensive applications in robotics, particularly in localization and mapping tasks. By
leveraging EKF for state estimation and incorporating sensor measurements from IMU and wheel
encoders (odometry), robots can navigate and localize themselves accurately even in complex and
dynamic environments, making it a cornerstone algorithm in robotics research and development.

2.4 Adaptive Monte Carlo Localization (AMCL)

Adaptive Monte Carlo Localization (AMCL) is a probabilistic localization algorithm widely used
in robotics for estimating the position and orientation of a robot within an environment. AMCL is
particularly suitable for dynamic environments and robots with non-linear motion models.

2.4.1 Algorithm Overview

AMCL is based on the Monte Carlo Localization (MCL) method, which represents the robot’s
belief about its pose as a set of weighted particles. Unlike MCL, which uses a fixed number of
particles, AMCL adaptively adjusts the number of particles based on the robot’s uncertainty and the
complexity of the environment. This adaptive approach improves computational efficiency while
maintaining localization accuracy.

2.4.2 Localization Process

1. Initialization: AMCL initializes a set of particles representing possible robot poses uni-
formly distributed across the map.

2. Prediction: In the prediction step, the particles are propagated forward in time using the
robot’s motion model. This accounts for the robot’s motion uncertainty and updates the
particle poses accordingly.

3. Measurement Update: Upon receiving sensor measurements, such as those from a laser
range finder (LiDAR) or other sensors, AMCL weights the particles based on their likelihood
to generate the observed measurements. The weights are computed using a measurement
model that compares the expected sensor readings at each particle pose with the actual sensor
measurements.

4. Resampling: To prevent particle depletion and maintain diversity in the particle set, AMCL
performs resampling based on the particle weights. Particles with higher weights are more
likely to be replicated, while particles with lower weights are discarded. This process ensures
that the particle set accurately represents the posterior belief about the robot's pose. A sketch
of these steps follows below.
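
A minimal sketch of the predict, weight, and resample steps in Python (NumPy) is given below.
It is illustrative only, not the actual AMCL implementation: the scan_likelihood function, which
scores a pose against the map and the current scan, is assumed to be supplied by a measurement
model such as a likelihood field.

import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, omega, dt, motion_noise=(0.02, 0.01)):
    # Propagate each particle [x, y, phi] with the motion model plus sampled noise.
    n = len(particles)
    v_n = v + rng.normal(0.0, motion_noise[0], n)
    w_n = omega + rng.normal(0.0, motion_noise[1], n)
    particles[:, 0] += v_n * np.cos(particles[:, 2]) * dt
    particles[:, 1] += v_n * np.sin(particles[:, 2]) * dt
    particles[:, 2] += w_n * dt
    return particles

def update_weights(particles, scan_likelihood):
    # scan_likelihood(pose) -> p(z | pose), assumed given by a map-based model.
    w = np.array([scan_likelihood(p) for p in particles])
    w += 1e-300                       # avoid division by zero
    return w / w.sum()

def low_variance_resample(particles, weights):
    # Particles with large weights are replicated; small weights tend to be dropped.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)      # guard against floating-point round-off
    return particles[idx].copy(), np.full(n, 1.0 / n)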

2.4.3 Applications in Robotics

AMCL has a wide range of applications in robotics, including localization and navigation tasks. It
is commonly used in conjunction with other sensors such as wheel encoders, IMUs, and LiDAR
sensors to provide accurate and robust localization in indoor and outdoor environments. AMCL en-
ables robots to localize themselves accurately even in challenging scenarios with dynamic obstacles
and changing environmental conditions.

3 THEORETICAL BACKGROUND

3.1 Robot Kinematic Model

The differential wheeled robot is a mobile robot whose movement is based on two separately driven
wheels placed on either side of the robot body. It can thus change its direction by varying the
relative rate of rotation of its wheels and hence does not require an additional steering motion.
Robots with such a drive typically have one or more castor wheels to prevent the vehicle from
tilting.

Figure 5: The differential wheeled robot

\[
V = \frac{r(v_r + v_l)}{2}, \qquad
\omega = \frac{r(v_r - v_l)}{L}
\]

Where:

V is the linear velocity of the robot.

r is the radius of the wheels.

vl is the angular velocity of the left wheel.

vr is the angular velocity of the right wheel.

ω is the angular velocity of the robot.

L is the distance between the wheels (wheelbase).

The forward kinematics of the robot can therefore be written as:

\[
\begin{bmatrix} V \\ \omega \end{bmatrix}
= r
\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\[2pt] \frac{1}{L} & -\frac{1}{L} \end{bmatrix}
\begin{bmatrix} v_r \\ v_l \end{bmatrix}
\]

The inverse kinematics of the robot can be represented as:

\[
\begin{bmatrix} v_r \\ v_l \end{bmatrix}
= \frac{1}{r}
\begin{bmatrix} 1 & \frac{L}{2} \\[2pt] 1 & -\frac{L}{2} \end{bmatrix}
\begin{bmatrix} V \\ \omega \end{bmatrix}
\]
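
These two relations translate directly into Python; a small sketch is shown below, using the
symbols defined above (r: wheel radius, L: wheelbase). The numeric values in the usage comment
are hypothetical.

def forward_kinematics(v_r, v_l, r, L):
    # Wheel angular velocities [rad/s] -> body velocities (V [m/s], omega [rad/s]).
    V = r * (v_r + v_l) / 2.0
    omega = r * (v_r - v_l) / L
    return V, omega

def inverse_kinematics(V, omega, r, L):
    # Body velocities -> wheel angular velocities.
    v_r = (V + (L / 2.0) * omega) / r
    v_l = (V - (L / 2.0) * omega) / r
    return v_r, v_l

# Example with hypothetical dimensions: wheel speeds for V = 0.2 m/s, omega = 0.5 rad/s.
# v_r, v_l = inverse_kinematics(0.2, 0.5, r=0.05, L=0.3)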

3.2 Extended Kalman Filter

3.2.1 State and Input Vector

The state vector at time \(t\) is defined as \(x_t = [x_t, y_t, \phi_t, v_t, \omega_t]^T\), where:

• x, y: 2D position,

• ϕ: orientation,

• v: velocity, and

• ω: rotation velocity.

The input vector is denoted as \(u_t = [v_t, \omega_t]^T\), where:

• v: velocity, and

• ω: rotation velocity.

3.2.2 Motion Model

The robot model is

\[
\dot{x} = v\cos(\phi), \qquad
\dot{y} = v\sin(\phi), \qquad
\dot{\phi} = \omega
\]

So, the motion model is \(x_{t+1} = f(x_t, u_t) = F x_t + B u_t\), where

\[
F =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix},
\qquad
B =
\begin{bmatrix}
\cos(\phi)\,\Delta t & 0 \\
\sin(\phi)\,\Delta t & 0 \\
0 & \Delta t \\
1 & 0 \\
0 & 1
\end{bmatrix}
\]

and \(\Delta t\) is the time interval.

The motion function is

\[
\begin{bmatrix} x' \\ y' \\ \phi' \\ v' \\ \omega' \end{bmatrix}
= f(x, u) =
\begin{bmatrix}
x + v\cos(\phi)\,\Delta t \\
y + v\sin(\phi)\,\Delta t \\
\phi + \omega\,\Delta t \\
v \\
\omega
\end{bmatrix}
\]

Its Jacobian matrix is

\[
J_f =
\begin{bmatrix}
\frac{\partial x'}{\partial x} & \frac{\partial x'}{\partial y} & \frac{\partial x'}{\partial \phi} & \frac{\partial x'}{\partial v} & \frac{\partial x'}{\partial \omega} \\
\vdots & & & & \vdots \\
\frac{\partial \omega'}{\partial x} & \frac{\partial \omega'}{\partial y} & \frac{\partial \omega'}{\partial \phi} & \frac{\partial \omega'}{\partial v} & \frac{\partial \omega'}{\partial \omega}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & -v\sin(\phi)\,\Delta t & \cos(\phi)\,\Delta t & 0 \\
0 & 1 & v\cos(\phi)\,\Delta t & \sin(\phi)\,\Delta t & 0 \\
0 & 0 & 1 & 0 & \Delta t \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\]

(the \((2,3)\) entry is \(+v\cos(\phi)\,\Delta t\), since \(y' = y + v\sin(\phi)\,\Delta t\)).
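
A sketch of this motion model and its Jacobian in Python (NumPy) follows the matrices above;
states and inputs are column vectors, and dt stands for the time interval \(\Delta t\).

import numpy as np

def motion_model(x, u, dt):
    # x = [x, y, phi, v, omega]^T (5x1), u = [v, omega]^T (2x1).
    F = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])
    phi = x[2, 0]
    B = np.array([[np.cos(phi) * dt, 0.0],
                  [np.sin(phi) * dt, 0.0],
                  [0.0,              dt],
                  [1.0,              0.0],
                  [0.0,              1.0]])
    return F @ x + B @ u

def jacobian_f(x, u, dt):
    # Jacobian of the motion function, evaluated at the current estimate.
    phi, v = x[2, 0], u[0, 0]
    return np.array([[1.0, 0.0, -v * np.sin(phi) * dt, np.cos(phi) * dt, 0.0],
                     [0.0, 1.0,  v * np.cos(phi) * dt, np.sin(phi) * dt, 0.0],
                     [0.0, 0.0,  1.0,                  0.0,              dt],
                     [0.0, 0.0,  0.0,                  1.0,              0.0],
                     [0.0, 0.0,  0.0,                  0.0,              1.0]])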

3.2.3 Observation Model

Wheel Encoder
The kinematics of a two-wheel robot can be described by the following equation:

\[
f = R(\phi)\, J\, u
\]

where the Jacobian matrix \(J\), the input vector \(u\), and the output vector \(f\) are, respectively,

\[
J =
\begin{bmatrix}
\frac{r}{2} & \frac{r}{2} \\
0 & 0 \\
\frac{r}{2l} & -\frac{r}{2l}
\end{bmatrix},
\qquad
u =
\begin{bmatrix} V_r \\ V_l \end{bmatrix},
\qquad
f =
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\phi} \end{bmatrix}
\tag{1}
\]

with \(l\) denoting half the wheelbase,

and the rotation matrix is

\[
R(\phi) =
\begin{bmatrix}
\cos(\phi) & -\sin(\phi) & 0 \\
\sin(\phi) & \cos(\phi) & 0 \\
0 & 0 & 1
\end{bmatrix}
\tag{2}
\]

By Euler discretization we can use the encoder ticks from the wheels to calculate the \(x\), \(y\),
and \(\phi\) of the robot:

\[
X_{t+1} = X_t + f(X_t)\, u
\]

where

\[
u = \frac{2\pi\,\Delta \text{Tick}}{\text{PPR}}
= \frac{2\pi}{\text{PPR}}
\begin{bmatrix}
R_{\text{Tick},t+1} - R_{\text{Tick},t} \\
L_{\text{Tick},t+1} - L_{\text{Tick},t}
\end{bmatrix},
\qquad
X_t =
\begin{bmatrix} x_t \\ y_t \\ \phi_t \end{bmatrix}
\]

\(\Delta \text{Tick}\) represents the change in tick count, and PPR is the pulses per revolution.
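
A sketch of one such Euler-discretization step in Python is shown below, with r the wheel radius,
l half the wheelbase, and ppr the pulses per revolution; the function and argument names are
illustrative.

import numpy as np

def ticks_to_pose(X, right_ticks, left_ticks, prev_right, prev_left, r, l, ppr):
    # X = [x, y, phi]^T; one integration step from encoder tick counts.
    # Wheel rotation increments [rad] since the previous sample.
    u = (2.0 * np.pi / ppr) * np.array([[right_ticks - prev_right],
                                        [left_ticks - prev_left]])
    phi = X[2, 0]
    R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                  [np.sin(phi),  np.cos(phi), 0.0],
                  [0.0,          0.0,         1.0]])
    J = np.array([[r / 2.0,        r / 2.0],
                  [0.0,            0.0],
                  [r / (2.0 * l), -r / (2.0 * l)]])
    return X + R @ J @ u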

The robot obtains \(x\), \(y\), and \(\phi\) position information from the Euler discretization of the
wheel encoders. The odometry observation model is therefore

\[
z_{\text{odom}} = g_{\text{odom}}(x_t) = H_{\text{odom}}\, x_t
\]

where

\[
H_{\text{odom}} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0
\end{bmatrix}
\]

The observation function is

\[
\begin{bmatrix} x' \\ y' \\ \phi' \end{bmatrix}
= g_{\text{odom}}(x) =
\begin{bmatrix} x \\ y \\ \phi \end{bmatrix}
\]

Its Jacobian matrix is

\[
J_{\text{odom}} =
\begin{bmatrix}
\frac{\partial x'}{\partial x} & \cdots & \frac{\partial x'}{\partial \omega} \\
\vdots & & \vdots \\
\frac{\partial \phi'}{\partial x} & \cdots & \frac{\partial \phi'}{\partial \omega}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0
\end{bmatrix}
\]

IMU
The robot can also obtain \(\phi\) information from the IMU sensor, so the IMU observation model is

\[
z_{\text{IMU}} = g_{\text{IMU}}(x_t) = H_{\text{IMU}}\, x_t
\]

where

\[
H_{\text{IMU}} =
\begin{bmatrix} 0 & 0 & 1 & 0 & 0 \end{bmatrix}
\]

The observation function is

\[
\begin{bmatrix} \phi' \end{bmatrix} = g_{\text{IMU}}(x) = \begin{bmatrix} \phi \end{bmatrix}
\]

and its Jacobian matrix is

\[
J_{\text{IMU}} =
\begin{bmatrix}
\frac{\partial \phi'}{\partial x} & \frac{\partial \phi'}{\partial y} & \frac{\partial \phi'}{\partial \phi} & \frac{\partial \phi'}{\partial v} & \frac{\partial \phi'}{\partial \omega}
\end{bmatrix}
=
\begin{bmatrix} 0 & 0 & 1 & 0 & 0 \end{bmatrix}
\]

3.2.4 Extended Kalman Filter

The localization process using Extended Kalman Filter (EKF) is as follows:


Predict

\[
x_{\text{Pred}} = F x_t + B u_t
\]
\[
P_{\text{Pred}} = J_f P_t J_f^T + Q
\]

Update

\[
z_{\text{Pred}} = H x_{\text{Pred}}
\]
\[
y = z - z_{\text{Pred}}
\]
\[
S = J_g P_{\text{Pred}} J_g^T + R
\]
\[
K = P_{\text{Pred}} J_g^T S^{-1}
\]
\[
x_{t+1} = x_{\text{Pred}} + K y
\]
\[
P_{t+1} = (I - K J_g) P_{\text{Pred}}
\]

where the measurement vector, observation matrix, and observation Jacobian stack the odometry
and IMU models:

\[
z = \begin{bmatrix} z_{\text{odom}} \\ z_{\text{IMU}} \end{bmatrix},
\qquad
H = \begin{bmatrix} H_{\text{odom}} \\ H_{\text{IMU}} \end{bmatrix},
\qquad
J_g = \begin{bmatrix} J_{\text{odom}} \\ J_{\text{IMU}} \end{bmatrix}
\]

and the matrices \(Q\) and \(R\) are (block-)diagonal:

\[
Q = \begin{bmatrix} Q_{\text{odom}} & \\ & Q_{\text{IMU}} \end{bmatrix},
\qquad
R = \begin{bmatrix} r_v & 0 \\ 0 & r_\omega \end{bmatrix}
\]
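
Putting the pieces together, the sketch below implements one predict/update cycle in Python
(NumPy). It reuses the motion_model and jacobian_f functions sketched in Section 3.2.2; since
the stacked observation model is linear, its Jacobian equals H. One assumption is made explicit:
the stacked measurement has four rows (odometry x, y, phi plus IMU phi), so R is taken here as a
4x4 measurement-noise covariance of matching dimension.

import numpy as np
# motion_model and jacobian_f as sketched in Section 3.2.2.

def ekf_step(x, P, u, z, dt, Q, R):
    # x: 5x1 state, P: 5x5 covariance, u: 2x1 input,
    # z: 4x1 stacked measurement [x_odom, y_odom, phi_odom, phi_imu]^T.
    # --- Predict ---
    x_pred = motion_model(x, u, dt)
    J_f = jacobian_f(x, u, dt)
    P_pred = J_f @ P @ J_f.T + Q

    # --- Update ---
    # Stacked observation matrix: odometry rows (x, y, phi) + IMU row (phi).
    H = np.array([[1, 0, 0, 0, 0],
                  [0, 1, 0, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0]], dtype=float)
    J_g = H                        # linear observation model, so J_g = H
    y = z - H @ x_pred             # innovation
    S = J_g @ P_pred @ J_g.T + R   # innovation covariance
    K = P_pred @ J_g.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(x.shape[0]) - K @ J_g) @ P_pred
    return x_new, P_new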

4 SYSTEM OVERVIEW

This chapter details the comprehensive system architecture employed in the positioning of robots
using the Extended Kalman Filter (EKF) and Adaptive Monte Carlo Localization (AMCL) tech-
niques within the ROS 2 framework. The section is divided into discussions of the hardware
components and the software architecture, explaining how these elements integrate to create a
robust system capable of accurate localization and efficient navigation.

4.1 Hardware Components

The hardware architecture is designed to robustly support the advanced data processing and in-
tegration necessary for precise robot localization in dynamic environments. Key components
include:

Figure 6: Hardware diagram

• Mini-PC: Acts as the central processing unit with ample computational power to process
data from sensors and execute localization algorithms in real-time.

• CANable USB: A versatile USB to CAN adapter that allows the Mini-PC to interface
effectively with CAN devices, ensuring reliable communication within the network essential
for industrial applications.

• L12-20612 Series Servo Drives: High-performance motor controllers that offer precise con-
trol over the robot’s brushed DC motors, significantly enhancing the accuracy and reliability
of the robot’s movement.

• HFI IMU A9: An advanced Inertial Measurement Unit critical for providing precise mea-
surements of the robot’s orientation and acceleration, which are fundamental for accurate
navigation.

• YDLidar G2: A 2D laser range finder essential for environmental scanning and mapping,
providing the data necessary for the robot’s localization and navigation systems.

4.1.1 CANable USB

The CANable USB serves as a compact and versatile USB to CAN (Controller Area Network) adapter
that enables seamless communication between a host computer and various CAN devices. This
adapter is instrumental in interfacing the robotic system’s Mini-PC with the CAN network, which
is crucial for managing communications in automotive and industrial automation applications.

Figure 7: CANable USB

Functionality The CANable USB is designed for high reliability and easy integration into existing
systems. It operates with open-source firmware, which allows for extensive customization and
adaptability to specific project requirements. The device supports both standard and extended CAN
frames, enhancing its utility in a broader range of applications.

Integration and Usage In the context of the robotic system, the CANable USB facilitates the
transmission of telemetry data and control commands across the system’s network, linking actuators,
sensors, and the central processing unit. This setup is critical for real-time performance monitoring
and control, ensuring that all components operate synchronously and efficiently.

Technical Specifications

• Connectivity: Standard USB interface for connection to the Mini-PC and CAN connectors
for linking with CAN bus devices.

• Compatibility: Supports a broad range of operating systems and is compatible with various
CAN analysis and development tools.

• Data Rate: Capable of handling high data rates typical in CAN networks, ensuring minimal
latency and high throughput.

Advantages The CANable USB's plug-and-play capability, combined with its robustness and
support for multiple CAN protocols, makes it an invaluable component in the robotic system.
It not only simplifies the hardware setup but also enhances the system’s overall reliability and
functionality.
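
As an illustration, the snippet below sketches how the Mini-PC might exchange frames with
devices on the bus through the CANable using the python-can library. It assumes the adapter is
flashed with firmware that exposes a socketcan interface (brought up as can0); the arbitration ID
and payload bytes are hypothetical, not the servo drives' actual protocol.

import can

# Assumes the adapter has been brought up as a socketcan interface, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
bus = can.interface.Bus(channel='can0', interface='socketcan')

# Send a (hypothetical) command frame to a node on the bus.
cmd = can.Message(arbitration_id=0x141,
                  data=[0xA2, 0x00, 0x00, 0x00, 0x10, 0x27, 0x00, 0x00],
                  is_extended_id=False)
bus.send(cmd)

# Read feedback frames (e.g. encoder data) with a 1-second timeout.
msg = bus.recv(timeout=1.0)
if msg is not None:
    print(f'id=0x{msg.arbitration_id:X} data={msg.data.hex()}')
bus.shutdown()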

4.1.2 L12-20612 Series Servo Drives

The L12-20612 Series Servo Drives are integral components of the robotic system, offering high-
performance motor control for precise and efficient handling of the robot’s locomotion. These
servo drives are specifically chosen for their robustness and their ability to provide fine-grained
control over the brushed DC motors used in our robotic platform.

Figure 8: L12-20612 Series Servo Drives

Core Functionality These servo drives facilitate precise speed and position control, which is
vital for the accurate maneuvering and positioning of the robot in dynamic environments. They
come equipped with advanced features such as programmable PID parameters, which allow for the
tuning of the controller to achieve optimal dynamic response and stability of the motor system.

Technical Specifications

• Voltage Compatibility: Operates across a wide voltage range, accommodating the electrical
system specifications of most robotic platforms.

• Current Handling: Capable of supporting peak currents necessary for high torque opera-
tions, making them suitable for high-load applications.

• Interface Options: Includes support for multiple control interfaces such as analog, PWM,
and digital, providing flexibility in how they are integrated and controlled within the system.

• Safety Features: Built-in overcurrent, overvoltage, and thermal protection safeguards ensure
that the servo drives operate safely under all conditions.

Implementation and Integration In our robotic system, the L12-20612 Series Servo Drives are
configured to work in tandem with the system’s central control unit via CAN bus, allowing for
coordinated control and feedback across the system’s motor network. This setup enhances the
robot’s ability to perform complex navigation and manipulation tasks with high precision.

Benefits and Impact The inclusion of L12-20612 Series Servo Drives significantly enhances
the operational capabilities of our robot. Their high precision and reliability improve the robot’s
performance in tasks that require detailed positional accuracy and fine motor control, crucial in
research and industrial applications where precision and reliability are paramount.

4.1.3 Inertial Measurement Unit: HFI IMU A9

The HFI IMU A9, developed by HFI Electronics Technology Co., Ltd., is an advanced Inertial
Measurement Unit critical to the robotic system for accurate real-time orientation, acceleration, and
angular velocity sensing. This high-performance sensor is fundamental for the dynamic control
and stabilization of the robot.

Figure 9: HFI IMU A9

Functional Capabilities The IMU A9 is instrumental in providing high-precision inertial measurements
that help in the real-time localization and navigation of the robot. By accurately
measuring the linear acceleration and angular rates, the IMU plays a vital role in correcting drifts
from wheel encoders and enhancing the robot’s trajectory planning and motion control.

Technical Specifications

• Sensor Types: Incorporates tri-axial gyroscopes and accelerometers to provide comprehensive
motion tracking across all axes.

• Measurement Range: Gyroscopes with a range of ±2000 degrees per second and accelerom-
eters with a range of ±16g, optimized for dynamic environments.

• Output Data Rate: Configurable data output rates from 10 Hz to 4 kHz, allowing adaptation
based on the specific requirements of the task and processing capabilities.

• Communication Interface: Supports I2C, SPI, and UART protocols for flexible and reliable
data transmission.

• Calibration: Pre-calibrated for biases and scale factors, with options for in-field recalibration
to maintain accuracy over its operational life.

Integration with ROS 2 The HFI IMU A9 is seamlessly integrated into the ROS 2 framework
of the robotic system. Data from the IMU is utilized by various ROS 2 nodes for tasks such as
sensor fusion, where it is combined with LiDAR and odometry data to enhance the localization and
mapping algorithms.

Advantages for Robotic Systems The integration of the HFI IMU A9 offers substantial benefits:

• Enhanced Navigation Accuracy: Improves the navigation accuracy of the robot by providing
stable and reliable inertial data, which is crucial for environments where GPS is unavailable
or unreliable.

• Dynamic Motion Control: Enables advanced motion control strategies that can dynamically
adjust based on the robot’s immediate physical environment, thus improving operational
effectiveness and safety.

• Robustness in Varied Conditions: Performs reliably under varying environmental conditions,
making it suitable for both indoor and outdoor robotic applications.

This IMU’s high fidelity and precision make it an indispensable component of the robot’s
sensory apparatus, directly contributing to the system’s overall robustness and reliability.

4.1.4 YDLidar G2

The YDLidar G2 is a compact and cost-effective 2D laser range finder widely utilized in robotic
applications for its ability to provide high-resolution spatial mapping. This sensor is integral to
the environmental perception capabilities of our robotic system, enabling precise localization and
navigation by generating detailed maps of the surroundings.

Functional Capabilities The YDLidar G2 operates on the principle of time-of-flight, emitting
laser beams and measuring the time it takes for them to return after reflecting off objects. This
method allows for accurate distance measurements, which are crucial for mapping and obstacle
detection.

Figure 10: YDLidar G2

Technical Specifications

• Range: Capable of detecting objects up to 12 meters away, making it suitable for both indoor
and outdoor applications.

• Angular Resolution: Provides a high angular resolution of 0.5 degrees, which enables the
creation of detailed and accurate environmental maps.

• Scan Rate: Features a high scan rate of up to 8,000 times per second, facilitating real-time
mapping and navigation.

• Field of View: Offers a 360-degree field of view, ensuring comprehensive coverage and
awareness of the robot’s surroundings.

• Connectivity: Supports USB and TTL serial connections, allowing for easy integration with
various computing platforms, including the Mini-PC used in our system.

Integration with ROS 2 Within the ROS 2 framework, the YDLidar G2 is seamlessly integrated
to provide real-time data for various subsystems. The lidar data is processed alongside inputs
from other sensors like IMUs and wheel encoders to enhance the accuracy of the localization
and navigation modules. This integration is crucial for the implementation of complex robotic
behaviors such as dynamic path planning and obstacle avoidance.
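
As an illustration, a minimal rclpy subscriber to the lidar's LaserScan messages might look as
follows; /scan is the conventional topic name, but the actual name depends on the driver
configuration.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanMonitor(Node):
    # Prints the distance to the nearest obstacle seen by the lidar.
    def __init__(self):
        super().__init__('scan_monitor')
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, msg):
        # Keep only readings inside the sensor's valid range (drops inf/NaN).
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            self.get_logger().info(f'nearest obstacle: {min(valid):.2f} m')


def main():
    rclpy.init()
    rclpy.spin(ScanMonitor())
    rclpy.shutdown()


if __name__ == '__main__':
    main()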

Benefits to the Robotic System The YDLidar G2 enhances our robotic system by:

• Enabling Precise Navigation: By providing detailed spatial data, it aids in the robot’s ability
to navigate through complex environments with high precision.

• Improving Safety: The sensor’s rapid response time and comprehensive environmental
coverage significantly enhance the robot’s ability to detect and respond to obstacles, thereby
increasing operational safety.

• Facilitating Advanced Applications: The detailed environmental models generated by the
YDLidar G2 are fundamental for applications requiring advanced spatial awareness and
decision-making capabilities.

The integration of the YDLidar G2 into our system exemplifies the use of advanced sensing
technology to significantly improve the functionality and safety of robotic platforms in real-world
applications.

4.2 Software Architecture

The software architecture of our robotic system, incorporating Extended Kalman Filter (EKF) and
Adaptive Monte Carlo Localization (AMCL) in ROS 2, integrates sensor data processing, EKF
localization, AMCL for robust localization, and robot kinematics.

4.2.1 ROS 2 and Workspace Setup

For this project, a Mini-PC running Ubuntu with ROS 2 Humble serves as the central
controller for all robot tasks. The project is organized within a workspace, facilitating efficient
management of resources and dependencies. Figure 12 illustrates the packages utilized for the
robot positioning system.
The workspace setup involves the following packages:

Figure 11: Software architecture diagram depicting EKF, AMCL, and robot kinematics.

• robot_can: This package provides nodes responsible for motor driver interaction, feedback
data reading, control data transmission, and message transformation into ROS 2 format.

• robot_wheel_odom: Interfaces with the robot's kinematics, converting wheel ticks into odometry
data using forward kinematics and Euler discretization methods.

• robot_ekf: Implements nodes for robot localization utilizing an extended Kalman filter,
ensuring accurate and efficient localization.

• robot_amc: Incorporates nodes for adaptive Monte Carlo localization, leveraging LiDAR
data for precise robot localization.

• robot_imu: Contains nodes for reading IMU data via serial USB and translating it into ROS
2 messages, providing crucial sensor information for localization and navigation.

• robot_launch: Offers launch files for essential Python scripts, facilitating seamless execution
of localization and mapping tasks.

Figure 12: Robot Localization’s Packages

4.2.2 Robot’s Odometry Implementation

This section details the implementation of odometry for the robot using the robot wheel odom
package. Within this package, a dedicated node is employed to subscribe to pertinent data streams,
notably the tick information provided by the robot can package’s node. Leveraging this data,
the node computes essential metrics pertaining to the robot’s localization, including its Cartesian
coordinates (X and Y) and rotational orientation (yaw). These calculations are facilitated through the
application of forward kinematics principles and Euler discretization techniques. Once computed,
the resulting odometric data is disseminated through the system via publication on relevant topics.
To facilitate robot control and induce odometric data generation, the following command
sequence is executed:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

This command starts a teleoperation node, affording manual control over the robot's
locomotion. Subsequently, to inspect the published odometry data, the /odom topic is
echoed:

ros2 topic echo /odom

This command enables the observation of published odometric data, facilitating the verification
of the robot’s movement dynamics and spatial localization.

4.2.3 Robot’s IMU Implementation

In this section, the implementation of the Inertial Measurement Unit (IMU) for the robot is discussed,
utilizing the robot_imu package. This package encompasses two nodes designed to facilitate IMU
functionality. The first node reads data from the IMU via serial communication
and publishes it for consumption by other nodes within the system. The second
node is dedicated to displaying yaw data, providing a means to confirm the robot's rotational
angle around the vertical axis.
To illustrate the IMU implementation, Figure 13 depicts the configuration of RViz2, a visual-
ization tool commonly used in ROS environments, to visualize the IMU data.

Figure 13: Rviz2 visualization of IMU data.

In Figure 13, the yaw data is prominently displayed, offering visual confirmation of the robot’s
rotational orientation. This visualization aids in validating the accuracy of the IMU readings and
ensures proper functionality within the robotic system.
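
For illustration, a minimal node that extracts yaw from the published IMU orientation quaternion
might look as follows; the imu/data topic name is an assumption about the first node's output.

import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu


class YawMonitor(Node):
    def __init__(self):
        super().__init__('yaw_monitor')
        self.create_subscription(Imu, 'imu/data', self.on_imu, 10)

    def on_imu(self, msg):
        q = msg.orientation
        # Yaw from quaternion: rotation about the vertical (z) axis.
        yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                         1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        self.get_logger().info(f'yaw: {math.degrees(yaw):.1f} deg')


def main():
    rclpy.init()
    rclpy.spin(YawMonitor())
    rclpy.shutdown()


if __name__ == '__main__':
    main()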

5 EKF IMPLEMENTATION

5.1 Simulation With Turtlebot3

For this section, I utilized Gazebo to implement an Extended Kalman Filter (EKF) with the
Turtlebot3 Burger robot model.

5.1.1 Setup

To begin, you’ll need to install the necessary ROS packages. Execute the following command in
your terminal:

sudo apt install ros-humble-turtlebot3*

Once the packages are installed, you can launch the robot simulation using the following
command:

ros2 launch turtlebot3_gazebo empty_world.launch.py

This command will initialize the Gazebo simulator with the Turtlebot3 Burger robot model.
Below is a visual representation of the robot in Gazebo:

Figure 14: Turtlebot3 Burger in Gazebo simulation environment.
