
Aalto University, School of Electrical Engineering

ELEC-E8740 - Basics of sensor fusion

Final Report

Basics of Sensor Fusion Project


Group 21

Date: 10.01.2021

Zhixin Cui 882781


Jingsheng Chen 914837
Yun Hua 914730
Table of Contents

1. Abstract
2. Brief introduction
3. Part Ⅰ: Sensor modeling
   3.1 Task One
       Task 1a
       Task 1b
       Task 1c
   3.2 Task Two
       Task 2a
       Task 2b
   3.3 Task Three
       Task 3a
       Task 3b
   3.4 Task Four
4. Part Ⅱ
   4.1 Task Five
       Task 5a
       Task 5b
5. Part Ⅲ
   5.1 Task Six
       Task 6a
       Task 6b
       Task 6c
   5.2 Task Seven
6. Conclusion

1. Abstract
The aim of this project is to develop an algorithm for tracking an autonomous robot using a set of sensors; the main work consists of processing the sensor data and fusing it. The robot is equipped with an inertial measurement unit (IMU), an infra-red detector, a motor controller, and a camera module. The IMU records the acceleration and angular velocity of the robot, while the camera provides position information by detecting QR codes at known locations. By using a filtering algorithm to fuse the IMU and camera data, we can track the robot's position dynamically.
Keywords: autonomous robot, sensor fusion, IMU, QR code, camera

2. Brief introduction
The main objective of this project is to develop a tracking system for a DiddyBorg rover-type robot equipped with several sensors. The robot carries an inertial measurement unit (IMU), which combines an accelerometer, a gyroscope, and a magnetometer. In addition to the IMU, the robot is equipped with an infra-red detector, a motor controller, and a camera module.

The IMU measures the acceleration and the angular rate of the robot along three orthogonal body axes. The accelerometer measurements and the gyroscope measurements (angular velocities) are combined to obtain the acceleration in the inertial frame. The velocity and the position of the robot can then be obtained by integrating the acceleration; the speed can also be provided directly by the motor controller. The camera detects predefined rectangles that contain unique QR codes with known positions.

The first part of this project develops a sensor model with estimated parameters. In the second part, the derived sensor model is combined with a sequential estimation algorithm to estimate the position and attitude of the robot. In the third part, we develop a tracking algorithm that fuses the IMU and camera data to obtain the final robot tracking system.

3. Part Ⅰ: Sensor modeling
3.1 Task One
Task 1a. Each row in the dataset corresponds to one timestep. The dataset contains 12 columns. The first column is the system time. The second to fourth columns are the accelerometer g-force data, the fifth and sixth columns are the accelerometer angle data, the seventh to ninth columns are the gyroscope angular velocity data, and the tenth to twelfth columns are the magnetometer data.

Figure 1. A view of the data

Figure 2. Accelerometer g-force data

Figure 3. Accelerometer angle data

Figure 4. Gyroscope angular velocity data

Figure 5. Magnetometer data

The data shown above indicate that the sample time is about 0.06 seconds. The raw values of all sensors are relatively small in magnitude, and the relevant columns can be extracted from the data for further processing.

Task 1b. Assuming the gyroscope data are read in the order x-axis, y-axis, z-axis, the estimated biases are 0.008983 for the x-axis, 0.016877 for the y-axis, and -0.001300 for the z-axis.
Task 1c. The variance matrix of the measurement noise is shown below:

Figure 6. Variance matrix of the IMU measurement noise
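As a rough sketch of how these quantities can be computed in Python (the file name and the assumption of a comma-separated stationary log are placeholders; the column indices follow the layout described in Task 1a):

import numpy as np

# Hypothetical file name; column layout as in Task 1a: col 0 = time,
# cols 1-3 = accelerometer, cols 4-5 = angles, cols 6-8 = gyroscope,
# cols 9-11 = magnetometer.
data = np.loadtxt("imu_stationary.csv", delimiter=",")

gyro = data[:, 6:9]              # stationary gyroscope readings (x, y, z)
gyro_bias = gyro.mean(axis=0)    # Task 1b: bias = mean of stationary readings

# Task 1c: measurement-noise (co)variance of all stationary sensor channels
noise_cov = np.cov(data[:, 1:], rowvar=False)

print("gyro bias:", gyro_bias)
print("noise variances:", np.diag(noise_cov))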

3.2 Task Two
Task 2a. The plots of the data are shown below:

Figure 7. Accelerometer g-force in the z-axis

Figure 8. Accelerometer g-force in the x-axis

Figure 9. Accelerometer g-force in the y-axis

From these plots, three observations can be made.

First, the robot was rotated five times, so the data give six measurements of gravity in six different directions.
Second, there is a bias in the measurements, which may be caused by the robot's XYZ axes not being exactly aligned with the IMU's axes.
Third, according to the order given in Readme.txt, the +y direction appears earlier than the -y direction, but the IMU reading is negative first. This means that the robot's y-axis is reversed with respect to the IMU's y-axis. (This raises a question: as shown in the figure, the robot's coordinate frame is right-handed, which would imply that the IMU's coordinate frame is not right-handed.)

The gains and biases for each body axis are shown below:

Figure 10. Gains for x, y, z-axis

Figure 11. Bias for x, y, z-axis

The reason why K_y and B_y are negative was discussed in the third observation above.
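As a minimal sketch of this gain/bias estimation (the six mean readings below are placeholders, not our measured values, and we assume the readings are already scaled so that gravity corresponds to ±1):

import numpy as np

# Mean accelerometer readings with each body axis pointing up (+1 g) and
# down (-1 g), taken from the six static segments of the rotation
# experiment. The numbers here are placeholders only.
reading_up = np.array([1.02, -0.98, 0.99])     # x, y, z axis aligned with +g
reading_down = np.array([-0.96, 1.00, -1.01])  # x, y, z axis aligned with -g

# Linear model: reading = K * a + B, evaluated at a = +1 and a = -1
K = (reading_up - reading_down) / 2.0   # gain per axis
B = (reading_up + reading_down) / 2.0   # bias per axis
print("gains:", K, "biases:", B)

The calibrated acceleration along each axis is then recovered as a = (reading - B) / K.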

Task 2b. The differences between the true values and the estimated values are given in Figure 12. Note that the results are listed in the same order as in Readme.txt.

Figure 12. Difference of the true and estimated value

The result is not exactly what we expected: some bias remains in the data. It could be improved with further calibration.

3.3 Task Three


Task 3a. The measured distances and the corresponding QR-code heights in pixels (read from the terminal) are visualized in the figure below.

Figure 13. Relationship of distance and height of QR-code

The resulting gradient and bias are shown below.

Figure 14. Gradient and bias
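As an illustration of this fit (assuming, as the pinhole model suggests, that the pixel height is fitted against the inverse distance; the numbers below are placeholders, not the values behind Figure 13):

import numpy as np

# Measured distances (cm) and QR-code heights (pixels); placeholder values.
distance_cm = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
height_px = np.array([310.0, 205.0, 154.0, 123.0, 103.0])

# Under the pinhole model the pixel height is proportional to 1/distance,
# so fit a line h = gradient * (1/d) + bias.
gradient, bias = np.polyfit(1.0 / distance_cm, height_px, deg=1)
print("gradient:", gradient, "bias:", bias)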

Task 3b. The focal length in pixels can be calculated from the following equation.
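The equation itself was embedded as an image in the original report; under the standard pinhole camera model, with h the QR-code height in pixels, H its physical height, and d the distance from Task 3a, the relation has the form

\[ h = \frac{f H}{d} \quad\Longrightarrow\quad f = \frac{h\,d}{H}. \]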

3.4 Task Four


The first column in the file is the cumulative distance in cm. The second column is the time taken for each 40 cm segment, in seconds. The speed can therefore be calculated by dividing 40 cm by the time interval. The results are shown in Table 1.

Distance (cm)   Time interval (s)   Speed (cm/s)
40              3.08                12.99
80              6.59                 6.07
120             6.54                 6.12
160             6.89                 5.81
200             6.29                 6.36
240             6.48                 6.17
280             6.64                 6.02

Table 1. Speed calculation
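As a small check of this computation (the arrays simply restate the values in Table 1), the speeds can be reproduced as follows:

import numpy as np

# Cumulative distance (cm) and per-segment travel time (s) from Table 1.
distance_cm = np.array([40, 80, 120, 160, 200, 240, 280])
interval_s = np.array([3.08, 6.59, 6.54, 6.89, 6.29, 6.48, 6.64])

# Each interval corresponds to a 40 cm segment, so speed = 40 / dt.
speed_cm_s = 40.0 / interval_s
print(np.round(speed_cm_s, 2))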

4. Part Ⅱ
4.1 Task Five
Task 5a
In Task 5 we need to localize the robot, i.e., to estimate three parameters: p_x, p_y, and ψ, which are the robot's position in the x and y coordinates and its orientation.
Each detected QR code gives two equations, which form g(x) in the measurement model:
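The measurement model was included as an image in the original report; as a purely illustrative reconstruction (an assumption on our part, not necessarily the exact form used), a model that yields two equations per detected QR code expresses the known global position (s_{x,i}, s_{y,i}) of QR code i in the robot's body frame:

\[ g_i(p_x, p_y, \psi) =
\begin{bmatrix} \cos\psi & \sin\psi \\ -\sin\psi & \cos\psi \end{bmatrix}
\begin{bmatrix} s_{x,i} - p_x \\ s_{y,i} - p_y \end{bmatrix}. \]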

Therefore, we need at least three equations to calculate three unknowns, which means
the minimum number of different QR-codes is 2.

Task 5b
We have the positions of multiple QR codes as measured by the camera, together with the known global positions of those QR codes. To calculate the global coordinates of the robot, we used the Gauss–Newton method to process multiple sets of camera measurements and obtained a relatively accurate position through several iterations.

Figure 15. Gauss–Newton Algorithm

The Jacobian matrix:

Figure 16. Jacobian matrix of g(x)

The values of p_x, p_y, and ψ gradually converge to the minimizer of the loss function through the Gauss–Newton iterations:

Figure 17. Convergence of the estimates p̂_x, p̂_y, and ψ̂

The final result is:

p̂_x = 60.8
p̂_y = 38.91
ψ̂ = 89.43

This is very close to the true global position of the robot:

p_x = 60
p_y = 39
ψ = 90

The estimated positions of the robot are as follows:

Figure 18. Estimated position of the robot
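To make the procedure concrete, the following sketch uses the illustrative body-frame measurement model from Task 5a, a numerical Jacobian instead of the analytical one in Figure 16, and placeholder QR-code positions; it is not our exact implementation:

import numpy as np

def g(theta, qr_xy):
    # Stacked body-frame positions of the QR codes for theta = (px, py, psi),
    # using the illustrative measurement model sketched in Task 5a.
    px, py, psi = theta
    R = np.array([[np.cos(psi), np.sin(psi)],
                  [-np.sin(psi), np.cos(psi)]])
    return (R @ (qr_xy - np.array([px, py])).T).T.ravel()

def gauss_newton(y, qr_xy, theta0, n_iter=10, eps=1e-6):
    # Minimize ||y - g(theta)||^2 by Gauss-Newton with a numerical Jacobian.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = y - g(theta, qr_xy)
        J = np.column_stack([(g(theta + eps * e, qr_xy) - g(theta, qr_xy)) / eps
                             for e in np.eye(3)])
        theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

# Placeholder QR-code positions and a noise-free synthetic measurement.
qr_xy = np.array([[0.0, 100.0], [120.0, 100.0], [120.0, 0.0]])
y = g(np.array([60.0, 39.0, np.deg2rad(90.0)]), qr_xy)
print(gauss_newton(y, qr_xy, theta0=[50.0, 30.0, 1.0]))  # approx. [60, 39, 1.5708]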

5. Part Ⅲ
5.1 Task Six
Task 6a
In Task 6, we choose the quasi-constant turn model as the dynamic model for the motion of the robot:

The state is now:

Figure 19. Quasi-Constant Turn Model

Since the model is nonlinear and continuous in time, we discretize it using the Euler–Maruyama method:

Figure 19. Model discretization (a)–(c)
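The discretized equations were embedded as images; as an illustrative sketch (assuming the state is x = (p_x, p_y, ψ) and that the speed v_k and the turn rate ω_k enter as inputs), the Euler–Maruyama discretization takes the form

\[
\begin{aligned}
p_{x,k+1} &= p_{x,k} + \Delta t_k\, v_k \cos\psi_k + q_{x,k},\\
p_{y,k+1} &= p_{y,k} + \Delta t_k\, v_k \sin\psi_k + q_{y,k},\\
\psi_{k+1} &= \psi_k + \Delta t_k\, \omega_k + q_{\psi,k},
\end{aligned}
\]

where q_k denotes the discretized process noise.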

Task 6b&c
We then perform dead reckoning in Python based on the speed measurements from the motor controller and the turn-rate measurements from the gyroscope. The results are:

Figure 20. Result of dead-reckoning


Compared with the track in Figure 6 of the project guide, the result looks similar.

Figure 21. Figure 6 of the project guide
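A minimal dead-reckoning sketch along these lines (assuming per-sample timestamps, motor speeds, and gyroscope turn rates are available as arrays; the input values below are placeholders, not our logged data):

import numpy as np

def dead_reckoning(t, v, omega, x0=(0.0, 0.0, 0.0)):
    # Integrate the discretized motion model sketched above.
    # t: timestamps (s), v: speed from the motor controller (cm/s),
    # omega: turn rate from the gyroscope (rad/s).
    px, py, psi = x0
    track = [(px, py, psi)]
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        px += dt * v[k] * np.cos(psi)
        py += dt * v[k] * np.sin(psi)
        psi += dt * omega[k]
        track.append((px, py, psi))
    return np.array(track)

# Placeholder input: drive straight for 5 s, then turn gently.
t = np.arange(0.0, 10.0, 0.06)
v = np.full_like(t, 6.0)                # about 6 cm/s, as in Table 1
omega = np.where(t < 5.0, 0.0, 0.2)     # rad/s
print(dead_reckoning(t, v, omega)[-1])  # final (p_x, p_y, psi)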

5.2 Task Seven
The IMU sensors alone may not be sufficient to perform localization accurately. Therefore, we need to incorporate the camera measurements into the filtering algorithm. In this task, we choose the extended Kalman filter (EKF) to fuse the camera and IMU data.

Figure 22. Extended Kalman Filter

We choose the first time instant at which the IMU, the motor controller, and the camera have all started to collect data as the starting point, use the IMU and motor data in the prediction step, and use the camera data in the measurement update.
The results are as follows:

Figure 23. Result of EKF
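For reference, a generic EKF prediction-update step of the kind used here is sketched below; the model functions f and h and their Jacobians are placeholders standing in for the discretized motion model and the camera measurement model, so this is not our exact implementation:

import numpy as np

def ekf_step(m, P, u, y, f, F_jac, h, H_jac, Q, R):
    # One EKF step: m, P = state mean and covariance, u = control input
    # (speed and turn rate), y = camera measurement (None if no QR code
    # was detected), Q, R = process and measurement noise covariances.

    # Prediction with the IMU/motor-driven motion model
    F = F_jac(m, u)
    m = f(m, u)
    P = F @ P @ F.T + Q

    # Update only when a camera measurement is available
    if y is not None:
        H = H_jac(m)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        m = m + K @ (y - h(m))
        P = (np.eye(len(m)) - K @ H) @ P
    return m, P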

6. Conclusion
In this project work, we first became familiar with the calibration, bias correction, and use of sensor modules such as the IMU and the camera. In the second part, we completed the static localization of the robot by using the camera to read the QR codes. After that, we used the IMU and motor data alone, and then the IMU combined with the camera, to track the robot's position dynamically.

This project is a comprehensive application of what we learned in class. For the localization of the static robot, we had to process many sets of redundant data and find the estimate with the smallest error; we therefore used the Gauss–Newton method, which converges to the minimum of the loss function through multiple iterations. For the dynamic tracking of the robot, we chose the quasi-constant turn model, which is a nonlinear, continuous-time model. Since our data are sampled at discrete time instants, we used the Euler–Maruyama method to discretize the model. Finally, in addition to predicting the position of the robot with the IMU and motor data, we also wanted to use the camera measurements to update the position estimate, which requires a filtering algorithm. We used the EKF to fuse the two sets of data and track the position of the robot more accurately.
