project_guide
Contents
1 Introduction
4 Reporting
5 Appendix
  5.1 Template codes
  5.2 Connecting to the robot
  5.3 Modules
    5.3.1 IMU
    5.3.2 Camera module
    5.3.3 Motor control
    5.3.4 Line sensor
    5.3.5 Data and experiment checklist
Figure 1: DiddyBorg robot with camera module and IR detector.
1 Introduction
The aim of this project is to develop an algorithm for tracking an autonomous robot
by using a set of sensors. The robot, a DiddyBorg rover-type robot, is programmed to
follow a black line inside a closed area surrounded by walls. The robot is equipped with
an inertial measurement unit (IMU), which is a combination of accelerometer, gyroscope,
and magnetometer. In addition to the IMU, the robot is also equipped with an infrared detector, a motor controller, and a camera module. The IMU measures the acceleration as well as the angular rate of the robot along three orthogonal body axes. The accelerometer measurements and the gyroscope measurements (angular velocity) from the IMU are combined to obtain the acceleration in the inertial frame. The velocity and the position of the robot are then readily obtained by integration of the acceleration. However, as time increases, the errors accumulate, and hence the deviation from the actual position grows. The camera system detects several predefined rectangles, which contain unique QR codes with known positions, attached to the wall. These measurements can be used to correct the position estimate obtained by double integration.
The sensors are connected to the main computer, which is a Raspberry Pi. The Raspberry Pi is responsible for handling all sensor measurement preprocessing and logging. It can also be used to transfer the recorded measurement data to other devices.
The project consists of two parts. In the first part, we develop and verify the sensor models for the IMU and the camera system; this includes calibrating the individual sensors and characterizing their measurement noise.
In the second part of the project, we combine the sensor model developed in the first
part with a dynamic model and a sequential estimation algorithm to obtain our final
robot tracking system.
The project contains both theoretical and practical parts. In the practical part, you
will be given measurement data (recorded log files) from different sensors. You can find the information about the log file for each task in Appendix 5.3.5. You also need to write two reports: i) a Part I report, and ii) a Part II report. The final grade of the project work will be based on the two reports. On the course homepage, you can find more information about the project deadlines and the grading criteria.
y = G x + b + r,

where the gain matrix G collects the per-axis gains (G = diag(k_x, k_y, k_z) if axis misalignment is neglected), and k_i, b_i, r_i are the gain, bias, and noise, respectively, for axis i ∈ {x, y, z}.
As in the case of the accelerometers, the gyroscope also experiences the Earth's angular velocity, which may be used as a reference for calibration. However, for a MEMS gyroscope, the Earth's angular velocity is so small that it is buried in the sensor noise.
From the IMU measurement record log, we can determine the variance matrix of the IMU measurement noise. This matrix will be used later for tracking purposes.
Task 1a. Visualize the data from the measurement record log file. Read Appendix 5.3.1 in order to understand what each column represents. What do you observe? Summarize what you have understood and write it down in your report.
Task 1b. Determine the bias and variance of the IMU sensors and write down the results in your report.
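As a starting point for Tasks 1a and 1b, here is a minimal Python sketch (an assumption, not part of the provided template) that loads a static IMU log and computes the per-axis bias (mean) and noise variance. The file name imu_static.csv and the absence of a header row are placeholders; the column layout follows Appendix 5.3.1.

import matplotlib.pyplot as plt
import pandas as pd

# Assumed column layout, see Appendix 5.3.1: timestamp [ms], accelerometer x, y, z [g],
# roll and pitch [deg], gyroscope x, y, z [deg/s], magnetometer x, y, z [Gauss].
cols = ["t_ms", "ax", "ay", "az", "roll", "pitch",
        "gx", "gy", "gz", "mx", "my", "mz"]
imu = pd.read_csv("imu_static.csv", header=None, names=cols)  # placeholder file name

# Task 1a: visualize the raw accelerometer and gyroscope channels.
imu.plot(x="t_ms", y=["ax", "ay", "az"], title="Accelerometer [g]")
imu.plot(x="t_ms", y=["gx", "gy", "gz"], title="Gyroscope [deg/s]")
plt.show()

# Task 1b: bias (sample mean) and noise variance of a static recording.
sensors = ["ax", "ay", "az", "gx", "gy", "gz"]
print("bias:\n", imu[sensors].mean())
print("variance:\n", imu[sensors].var())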
2.1.3 IMU calibration (accelerometer calibration)
To calibrate the accelerometer, we can rely on the assumption that the Earth's gravitational acceleration acting on a static object is constant, and we use the following simple procedure for every robot body axis:
• Turn on the robot, and run the python script for IMU (see Appendix).
• Place the robot on a firm horizontal support.
• Use a timer to record the log readings for each body axis for a fixed duration, for example, 30 s or 60 s.
• Record the acceleration reading for the robot in the up position, a_u, in that direction.
• Rotate the robot 180° so that the selected axis points downward, and record the acceleration reading for the robot in the down position, a_d.
• Calculate the gain for the selected axis as

k_i = \frac{a_u - a_d}{2g}.

Here, g is the gravitational acceleration.
• Calculate the bias b_i for the selected axis as

b_i = \frac{a_u + a_d}{2}.
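To make the two formulas concrete, here is a minimal sketch of the gain and bias computation for one axis; the readings a_up and a_down are placeholders for the mean static values you extract from your own log.

G = 9.81  # gravitational acceleration [m/s^2]; use 1.0 if the readings are in units of g

# Placeholder mean static readings with the selected axis pointing up and down.
a_up = 1.02 * G
a_down = -0.98 * G

k_i = (a_up - a_down) / (2.0 * G)  # gain for the selected axis
b_i = (a_up + a_down) / 2.0        # bias for the selected axis
print(f"gain k_i = {k_i:.4f}, bias b_i = {b_i:.4f}")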
Task 2. Plot the data from the measurement record log file. What do you observe? Determine the gain k_i and bias b_i for each body axis i ∈ {x, y, z}. Write down the results in your report.
Note that the coordinate system of the IMU sensor can be different from those of the camera and the other sensors.
Figure 2: Illustration of the robot and camera coordinate systems given by the left-hand axis rule. Notice that the camera coordinate system is parallel to the robot coordinate system.
Figure 4: Relation between distance of QR-codes from the camera and detected height.
In the camera measurement logs, you will receive y_1 and y_2 in pixel units, as well as the unique number associated with each QR-code. We can convert these values into the position of the robot in the inertial frame. To do this, we need to estimate the focal length of the camera in pixel units.
• Turn on the robot and run the python script for camera module (see Appendix).
• Wait until the camera module is ready.
• Prepare one QR-code only for the calibration.
• Place a tape measure below the robot, with the camera facing perpendicular to the wall with the QR-code, starting as close to the wall as possible such that the QR-code is detected and printed in the terminal. Write down the actual distance from the camera lens to the wall. You do not need to write down the terminal QR-code readings, as they are stored in the log file.
• Increase the distance by one or two centimeters and hold for several seconds. Repeat this step until the QR-code can no longer be read by the robot.
• Using the log file readings and Equation (2), we can determine the focal length f.
To get an insight into the focal length, plot one over the height against the recorded distance; you should get a nearly linear relation, see Fig. 4. Then you can use standard linear least-squares regression to obtain the gradient and the bias. Notice that the gradient k that you get is the product of the QR-code length (in cm) and the "actual" focal length in pixels.
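A minimal least-squares sketch of this fit is given below; the distance and height arrays are placeholders for the values read from the Task 3 calibration log, and the fitted gradient k = h_0 f directly gives the focal length needed in Task 3b.

import numpy as np

# Placeholder arrays: measured distance [cm] and detected QR-code height [pixels].
d = np.array([20.0, 30.0, 40.0, 60.0, 80.0, 100.0])
h = np.array([360.0, 240.0, 180.0, 120.0, 90.0, 72.0])

# Fit d = k * (1/h) + b with ordinary linear least squares.
A = np.column_stack([1.0 / h, np.ones_like(h)])
(k, b), *_ = np.linalg.lstsq(A, d, rcond=None)

h0 = 11.5     # true QR-code height [cm]
f = k / h0    # focal length [pixels], since k = h0 * f
print(f"gradient k = {k:.1f}, bias b = {b:.2f} cm, focal length f = {f:.1f} px")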
For each QR-code detected, you can estimate the horizontal distance x_3 and direction φ from the QR-code center point to the robot using (2). Let y_1, y_2 represent the center point of the QR-code, and let h represent the detected height of the QR-code, all given in pixels. If the actual height of the QR-code is given by h_0, then we can measure the robot distance x_3 and heading φ using
x_3 = \frac{h_0 f}{h} + b,    (3)

where b is a bias, and

\phi = \arctan\!\left(\frac{y_1}{f}\right).    (4)
Task 3a. There are two columns in the measurement record log file. The first column represents the measured distance (in cm) and the second column represents the height (in pixels) measured from the terminal. You need to plot the data as described above and determine the gradient and the bias. Write down the results in your report. Note that you also need to consider the distance of the camera from the surface of the robot, which is provided in the readme.txt file with the data.
Task 3b. Determine the focal length in pixels from Equation (3), given that the height h_0 of the QR-code is 11.5 cm.
Task 4. There are two columns in the measurement record log file. The first column represents the distance (in cm) measured from the measuring tape and the second column represents the measured time (in s). You need to determine the speed of the robot. Write down the results in your report. Note that you need to determine the distance interval from the given log file.
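As a simple starting point for Task 4, the sketch below fits a straight line to the distance-versus-time data (assuming an approximately constant speed); the arrays are placeholders for the two columns of the log file.

import numpy as np

# Placeholder columns: distance [cm] from the measuring tape and time [s] from the log.
distance_cm = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
time_s = np.array([0.0, 1.1, 2.0, 3.1, 4.0])

# Fit distance = v * t + d0; the slope v is the estimated robot speed.
v_cm_s, d0 = np.polyfit(time_s, distance_cm, deg=1)
print(f"estimated speed: {v_cm_s:.2f} cm/s")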
3.1 Localization
Localization is the process of estimating the position of a robot with respect to its
environment. It is a fundamental capability that autonomous vehicles and mobile robots need in order to localize themselves globally for further decision making. In this part, we are interested in estimating the robot's position and attitude using a camera sensor.
Now that we have both IMU and camera ready, we can use both of them to localize
the robot position. This will serve as a validation of the measurement model of the camera derived before. Notice that although the camera axes are parallel to the robot axes, the center position of the robot differs from the camera lens position. In the robot's local frame, the camera lens position is fixed, but in the global frame we need to take into account the direction of the robot in the global coordinates. The rotation matrix for a counterclockwise rotation of the 2 × 1 vector a by an angle α around the z-axis is given by

R(\alpha) = \begin{bmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{bmatrix},    (5)

and the rotated vector is

a' = R(\alpha) a.    (6)
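To illustrate Equations (5) and (6), the sketch below transforms an (assumed) camera-lens offset, given in the robot's local frame, into global coordinates; the offset, heading, and robot position values are placeholders.

import numpy as np

def rot2d(alpha):
    """Counterclockwise rotation around the z-axis, Equation (5)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s], [s, c]])

# Placeholder values: lens offset in the robot frame [cm], heading, and robot position.
lens_offset_local = np.array([0.0, 10.0])
psi = np.deg2rad(30.0)
robot_center_global = np.array([50.0, 20.0])

lens_offset_global = rot2d(psi) @ lens_offset_local            # Equation (6)
camera_position_global = robot_center_global + lens_offset_global
print(camera_position_global)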
We will now place the robot with the camera facing one of the walls. Make sure that the camera is able to detect as many QR-codes as possible (see Figure 5). We may need to use smaller QR-codes if the camera is not able to detect more than one QR-code at the same time. You can check Appendix 5.3.2 on how to adjust the Python script for this case. We measure the exact global position of the robot as a reference. Also, we measure the QR-code positions in global coordinates.
We assume that the global position of each QR-code is (s_i^x, s_i^y). If the global position and heading of the robot are (p^x, p^y, ψ), then for each QR-code, the measurement model for the camera can be defined as

d_i = \sqrt{(s_i^x - p^x)^2 + (s_i^y - p^y)^2},
\phi_i = \arctan\!\big((s_i^y - p^y)/(s_i^x - p^x)\big) - \psi.    (7)
If we know the exact distance d_i and the heading φ_i of each QR-code, then we can estimate p^x, p^y, ψ. In general, these are not known. However, since we know the focal length f of the camera, we can recover d_i and φ_i using Equations (3) and (4). The camera reading gives the height h_i and the center position (C_{x,i}, C_{y,i}) of each QR-code in the image plane, and the true height h_0 of the QR-code is known, i.e., 11.5 cm, so we can get
d_i = \frac{h_0 f}{h_i},
\phi_i = \arctan\!\left(\frac{C_{x,i}}{f}\right).    (8)
Therefore, we have

h_i = \frac{h_0 f}{\sqrt{(s_i^x - p^x)^2 + (s_i^y - p^y)^2}},
C_{x,i} = f \tan\!\big(\arctan((s_i^y - p^y)/(s_i^x - p^x)) - \psi\big),    (9)

which can be written compactly as a nonlinear measurement model

y = g(x).    (10)
Figure 5: Detected QR-codes by CameraModule script.
Task 5a. Describe the relation between the QR-codes global coordinates and the robot’s
static position in your report; see Figure 7 as a reference. What is the minimum
number of different QR-codes that are needed to estimate the position and attitude
of the robot in global coordinates?
Task 5b. Next, use a nonlinear (weighted) least-squares technique to estimate the position and heading of the robot. To do this, you need to derive the Jacobian of the measurement model that you choose and the measurement variance matrix for the camera module.
Hints:
• Try to identify how many QR-codes are detected by the camera at each time step from the record log file.
• The global positions of the QR-codes are given in the qr code position in global coordinate.csv file. Also, note that the true position of the robot with respect to the frame/wall is given.
• You need to consider the focal length f that you found in Task 3 to correct the
distance in the measurement log file.
• The measurement model g(x) is defined in Equation (7) or Equation (9). Choose an appropriate Jacobian of the measurement model. For this, check Lecture 4, Slide 18 as a starting point.
• Choose an appropriate nonlinear optimization method to estimate the position and heading of the robot with respect to the frame (i.e., the global position). See Lectures 4 and 5; a minimal code sketch follows these hints.
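As a minimal sketch for Task 5b, the code below implements the measurement model of Equation (7) and a plain Gauss-Newton iteration with a numerical Jacobian; the QR-code positions are placeholders and the measurements are simulated, so replace them with the distances and headings obtained from the camera log via Equation (8), and replace the numerical Jacobian with the analytic one you derive.

import numpy as np

# Placeholder QR-code positions (s_x, s_y) in global coordinates [cm].
S = np.array([[0.0, 100.0], [50.0, 100.0], [100.0, 100.0]])

def g(x):
    """Measurement model of Equation (7) for x = [p_x, p_y, psi]:
    stacked distances d_i and headings phi_i to each QR-code."""
    px, py, psi = x
    out = []
    for sx, sy in S:
        out.append(np.hypot(sx - px, sy - py))
        out.append(np.arctan2(sy - py, sx - px) - psi)
    return np.array(out)

def numerical_jacobian(fun, x, eps=1e-6):
    """Forward-difference Jacobian; replace with your analytic Jacobian."""
    fx = fun(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (fun(x + dx) - fx) / eps
    return J

# Simulated measurements for this demo; use the camera-derived d_i, phi_i instead.
x_true = np.array([50.0, 20.0, 0.3])
rng = np.random.default_rng(0)
y = g(x_true) + rng.normal(scale=[1.0, 0.01] * len(S))

x = np.array([40.0, 30.0, 0.0])   # initial guess [p_x, p_y, psi]
W = np.eye(y.size)                # weights (inverse measurement covariance)
for _ in range(20):               # Gauss-Newton iterations
    r = y - g(x)
    J = numerical_jacobian(g, x)
    x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
print("estimated [p_x, p_y, psi]:", x)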
Figure 6: The predefined semi-elliptical tracking path.
3.2 Tracking
In this part, we extend these results and develop the models and the algorithm necessary for tracking the robot while it is moving. The tasks are described in Tasks 6 and 7 below.
A quasi-constant turn model can be expressed as follows:

\dot{p}^x(t) = v(t) \cos(\varphi(t)),
\dot{p}^y(t) = v(t) \sin(\varphi(t)),
\dot{v}(t) = w_1(t),
\dot{\varphi}(t) = w_2(t),

where

• the state is x(t) = (p^x(t), p^y(t), v(t), \varphi(t))^T,
• w_1(t) and w_2(t) are white process noises.

The accelerometer measures \dot{v}(t) and the gyroscope measures \dot{\varphi}(t). Hence, we can rewrite \dot{v}(t) = a_{acc}(t) + w_1(t) and \dot{\varphi}(t) = \omega_{gyro}(t) + w_2(t). In general, the accelerometer measurements are not accurate enough. We can get the speed v(t) directly from the wheels, for example. Thus, we can have a reduced quasi-constant turn model

\dot{p}^x(t) = v(t) \cos(\varphi(t)),
\dot{p}^y(t) = v(t) \sin(\varphi(t)),
\dot{\varphi}(t) = \omega_{gyro}(t) + w(t),

where the speed v(t) is treated as a known input.
Task 6a. Choose a dynamic model to model the motion of the robot. The robot moves in two dimensions and hence a two-dimensional model is needed. Once you have chosen a suitable motion model, discretize it using an appropriate discretization method. The inputs to the motors, in terms of pulse-width modulation, are recorded, which gives the velocity, and the gyroscope gives the turn rate. You should use these inputs in your dynamic model.
Task 6b. Do dead-reckoning (i.e., prediction) based on speed measurements from motor
control and turn rate measurements from gyroscope.
Task 6c. Compare your dead-reckoning result with the predefined track in Figure 6.
Figure 7: Illustration of the global coordinate system, QR-codes with known positions in global coordinates, and the robot's local acceleration. When the robot camera points along the y axis, the global angle of the robot equals zero.
Hints:
• Use the quasi-constant turn model as your dynamic model. Also make sure that the sign of the gyroscope measurement is correct and that its units are converted to radians.
• We will use the gyroscope to measure the heading.
• You can consider the velocity to be the average of the input pulses applied to the wheels. If you are interested in building a more complicated model, you can consider the diameter and width of the wheels as well as the distance between the right and left wheels. The diameter and width of each wheel are 65 mm and 25 mm, respectively. The distance between the left and right wheels is 180 mm. A minimal dead-reckoning sketch follows these hints.
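As mentioned in the hints, here is a minimal dead-reckoning sketch (an Euler discretization of the reduced quasi-constant turn model); the time stamps, speed, and turn-rate arrays are placeholders for the values you extract from the motor control and IMU logs.

import numpy as np

def dead_reckon(t, v, omega, x0=0.0, y0=0.0, psi0=0.0):
    """Euler-discretized reduced quasi-constant turn model.
    t: time stamps [s]; v: speed (e.g., scaled average of the two PWM inputs);
    omega: gyroscope turn rate [rad/s] (check the sign convention!)."""
    px, py, psi = [x0], [y0], [psi0]
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        px.append(px[-1] + dt * v[k - 1] * np.cos(psi[-1]))
        py.append(py[-1] + dt * v[k - 1] * np.sin(psi[-1]))
        psi.append(psi[-1] + dt * omega[k - 1])
    return np.array(px), np.array(py), np.array(psi)

# Placeholder inputs: constant speed and turn rate produce a circular arc.
t = np.arange(0.0, 10.0, 0.05)
v = np.full_like(t, 15.0)                  # speed [cm/s]
omega = np.full_like(t, np.deg2rad(10.0))  # turn rate [rad/s]
px, py, psi = dead_reckon(t, v, omega)

Compare the resulting (px, py) trajectory with the predefined track in Figure 6 (Task 6c).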
Task 7. Discretize the dynamic model and implement a nonlinear filter (e.g., EKF or particle filter) to estimate the robot position and heading in global coordinates.
Hint:
• The model and filtering algorithms may be quite sensitive to the tuning parameters (the spectral density of the process noise and the measurement noise). For the process noise, think about what it actually represents physically and relate it to the robot's motion. For the measurement noise, you may estimate it based on the model parameter estimation data and possibly add some margin to account for modeling errors. A minimal EKF skeleton is sketched below.
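The skeleton below is a minimal, generic EKF predict-update step with numerical Jacobians; it is a sketch under the assumption that you supply your own discretized dynamic model f, measurement model h (e.g., Equation (7) at camera time steps), and tuned covariances Q and R.

import numpy as np

def ekf_step(x, P, f, h, Q, R, y, eps=1e-6):
    """One EKF predict-update step with forward-difference Jacobians.
    x, P: prior mean and covariance; f: discretized dynamic model; h: measurement
    model; Q, R: process and measurement noise covariances; y: measurement."""
    def jac(fun, x0):
        f0 = fun(x0)
        J = np.zeros((f0.size, x0.size))
        for j in range(x0.size):
            dx = np.zeros_like(x0)
            dx[j] = eps
            J[:, j] = (fun(x0 + dx) - f0) / eps
        return J

    # Prediction
    F = jac(f, x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q

    # Update
    H = jac(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

Between camera detections, you can run the prediction step only; with a particle filter, the same considerations for tuning Q and R apply.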
4 Reporting
Reporting the technical results of a project is an important skill. Reporting should be done using concise and accurate language, including enough detail that someone with the same education and background can understand, interpret, and reproduce your work.
Your derivations, results, and interpretations should be backed up by data, illustrations,
and so forth. Furthermore, also make sure that you answer and discuss the questions
raised in this guide.
Your project report should include at least the following (you are free to use whatever structure you prefer, as long as it is consistent and logical):
• An abstract that briefly summarizes what you have done and what the results are,
• a brief introduction to the project and the problem that you are solving,
• derivation of the model,
• calibration procedures that are used (if any),
• description of the estimation method(s) used for parameter estimation, validation,
and tracking,
• the results,
• conclusions and/or a summary,
• references (if applicable).
Note that, for the Part I report, not all of the above items may apply yet.
For both parts (Part I and Part II), your submission should consist of a PDF report
and the code(s) (Python/MATLAB) with solutions used for the tasks.
5 Appendix
In this section we will describe how to connect to the robot platform via SSH, and how
to run the simulation.
5.1 Template codes
The code templates are available at:
https://github.com/EEA-sensors/elec-e8740-project-code-template
Also note that the scripts described in this section can be found at the aforementioned link.
5.2 Connecting to the robot

• Turn on the robot using the switch located at the bottom of the robot.
• Find the IP address of the robot. Check the MAC address written on the upper part of the robot, then use an IP scanner to find the associated IP address. If you have difficulty finding the IP address, you can also connect a monitor to the robot via the HDMI port located behind the camera (unscrew the top part of the robot first), connect a USB keyboard to the robot's USB ports, and execute ifconfig.
• Once you have found the IP address of the robot, you can connect to the robot via SSH, with user name pi and password pipipipi. You can also automate this process, as shown below.
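One common way to automate the connection (a sketch; the host alias and IP address below are placeholders, use the IP address you found above) is to add an entry to your ~/.ssh/config:

Host diddyborg
    HostName 192.168.1.123
    User pi

After that, ssh diddyborg connects directly, and you can additionally set up key-based login with ssh-copy-id if you want to avoid typing the password.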
The project scripts are located on the robot in the directory ~/Git/DiddyBorg_Sensor_Fusion.
5.3 Modules
To run the modules, change the directory to ~/Git/DiddyBorg_Sensor_Fusion/data. All log files will be stored in this folder. There are three scripts that you need to run the robot properly:
• IMU.py
• CameraModule.py
• MotorControl.py
Pressing the Escape button will immediately stop the Python scripts, except for CameraModule.py when it is not showing any video output. Ideally, you should execute IMU.py first and wait until it is ready, then CameraModule.py. Once the camera is ready, execute MotorControl.py.
You can modify these files (and are encouraged to!) to accommodate your needs. Should you need to reset the configuration to default, you can always pull from Git:
git pull
5.3.1 IMU
After changing the directory to ~/Git/DiddyBorg_Sensor_Fusion/data, to collect readings from the IMU, you can run

python3 ../Diddyborg_python/IMU.py

You can also specify the output file name using --output=some_files.csv. The default sampling time is 0.05 s, and you can modify it with, e.g., --sampling=0.1.
The IMU log file columns are: the timestamp in ms, the linear acceleration along the x, y, z axes in units of g, the roll and pitch angles from the accelerometer in degrees, the gyroscope x, y, z readings in degrees/s, and the magnetometer field strength along the x, y, z axes in Gauss.
5.3.2 Camera module

To collect readings from the camera module, you can run

python3 ../Diddyborg_python/CameraModule.py
You can also specify the output file name using --output=some_files.csv. If you have access to an X-server when connecting to the robot (using VNC, for example), you can also specify --show to show the video stream from the camera module. You can also specify the QR-code length using --qrlength=xx, where the input is given in cm.
The camera log file columns are: the timestamp in ms, the QR-code number, the center position (C_{x,i}, C_{y,i}) of the QR-code in pixels, the width and height of the QR-code in pixels, the raw distance from the camera to the QR-code in cm, and the raw attitude of the QR-code relative to the camera in degrees.
5.3.3 Motor control

To control the motors, you can run

python3 ../Diddyborg_python/MotorControl.py
You can also specify the output file name using --output=some_files.csv. The motor control log file columns are: the timestamp in ms, and the first and second inputs as a fraction of the pulse-width modulation (PWM). The input signal is between 0 and 1.
5.3.4 Line sensor

Figure 8: IR Line Detector.
5.3.5 Data and experiment checklist