
Master of Science Thesis in Electrical Engineering

Department of Electrical Engineering, Linköping University, 2020

Autonomous Landing of an
Unmanned Aerial Vehicle
on an Unmanned Ground
Vehicle in a GNSS-denied
scenario

Albin Andersson Jagesten and Alexander Källström
Master of Science Thesis in Electrical Engineering
Autonomous Landing of an Unmanned Aerial Vehicle on an Unmanned Ground
Vehicle in a GNSS-denied scenario:
Albin Andersson Jagesten and Alexander Källström
LiTH-ISY-EX–20/5327–SE

Supervisor: Anton Kullberg


isy, Linköpings universitet
Jouni Rantakokko
Swedish Defence Research Agency
Jonas Nygårds
Swedish Defence Research Agency
Joakim Rydell
Swedish Defence Research Agency

Examiner: Gustaf Hendeby


isy, Linköpings universitet

Division of Automatic Control


Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden

Copyright © 2020 Albin Andersson Jagesten and Alexander Källström


Abstract
An autonomous system consisting of an unmanned aerial vehicle (uav) in coop-
eration with an unmanned ground vehicle (ugv) is of interest in applications
for military reconnaissance, surveillance and target acquisition (rsta). The basic
idea of such a system is to take advantage of the vehicles' strengths and counteract
their weaknesses. The cooperation aspect suggests that the uav is capable of au-
tonomously landing on the ugv. A fundamental part of the landing is to localise
the uav with respect to the ugv. Traditional navigation systems utilise global
navigation satellite system (gnss) receivers for localisation. gnss receivers have
many advantages, but they are sensitive to interference and spoofing. Therefore,
this thesis investigates the feasibility of autonomous landing in a gnss-denied
scenario.
The proposed landing system is divided into a control and an estimation sys-
tem. The control system uses a proportional navigation (pn) control law to ap-
proach the ugv. When sufficiently close, a proportional-integral-derivative (pid)
controller is used to match the movements of the ugv and perform a controlled
descent and landing. The estimation system comprises an extended Kalman fil-
ter that utilises measurements from a camera, an imu and ultra-wide band (uwb)
impulse radios. The landing system is composed of various results from previous
research.
First, the sensors used by the landing system are evaluated experimentally to
get an understanding of their characteristics. The results are then used to deter-
mine the optimal sensor placements, in the design of the ekf, as well as to shape
the simulation environment and make it realistic. The simulation environment
is used to evaluate the proposed landing system. The combined system is able
to land the uav safely on the moving ugv, confirming a fully-functional landing
system. Additionally, the estimation system is evaluated experimentally, with re-
sults comparable to those obtained in simulation. The overall results are promis-
ing for the possibility of using the landing system with the presented hardware
platform to perform a successful landing.

Acknowledgments
We would like to thank our supervisors Jouni Rantakokko, Jonas Nygårds and
Joakim Rydell at foi, who have all shown interest in this work and helped us
throughout this master’s thesis project. Special thanks go to Jouni Rantakokko
for his thorough work in proofreading and giving constructive criticism on this thesis, and to
Joakim Rydell, who assisted in the experimental test.
At Linköping University, we would like to thank our supervisor Anton Kull-
berg for his help with this thesis. We would also like to take the opportunity to
thank our opponents Carl Hynén Ulfsjö and Theodor Westny for their construc-
tive comments. Finally, we would like to thank our examiner Gustaf Hendeby
for his great interest throughout this master’s thesis project as well as his aid
during the experimental test.
As a closing remark, we would like to praise both foi and Linköping Uni-
versity for doing their very best in making sure that this master’s thesis could
proceed, despite the covid-19 pandemic.

Linköping, June 2020


Albin Andersson Jagesten and Alexander Källström

Contents

Notation xi

1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Outline of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 System Description 7
2.1 Unmanned Aerial Vehicle . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.2 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Unmanned Ground Vehicle . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Software Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1 Robot Operating System . . . . . . . . . . . . . . . . . . . . 11
2.3.2 ArduCopter Flight Controller . . . . . . . . . . . . . . . . . 11
2.4 Simulation Environment . . . . . . . . . . . . . . . . . . . . . . . . 11

3 Theory 13
3.1 Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Rotation Representations . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3 Quadcopter Dynamical Model . . . . . . . . . . . . . . . . . . . . . 15
3.4 Kalman Filter Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.4.1 Measurement Outliers . . . . . . . . . . . . . . . . . . . . . 17
3.4.2 Measurement Information Gain . . . . . . . . . . . . . . . . 18
3.5 Position Trilateration . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.6 Camera Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.7 Proportional-Integral-Derivative Control . . . . . . . . . . . . . . . 20


3.8 Proportional Navigation Control . . . . . . . . . . . . . . . . . . . 20


3.9 Barometric Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

4 Estimation System 23
4.1 Estimation System Description . . . . . . . . . . . . . . . . . . . . 23
4.2 Extended Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3 Attitude Measurements . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4 Ultra-Wide Band Sensor Network . . . . . . . . . . . . . . . . . . . 26
4.4.1 Sensor Model . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4.2 Model Parameter Tests . . . . . . . . . . . . . . . . . . . . . 27
4.4.3 Anchor Configuration . . . . . . . . . . . . . . . . . . . . . 30
4.5 Camera System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5.1 Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5.2 Fiducial Detection Algorithm . . . . . . . . . . . . . . . . . 36
4.5.3 Fiducial Marker . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5.4 Coordinate Transform . . . . . . . . . . . . . . . . . . . . . 38
4.5.5 Error Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.6 Accelerometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.7 Reference Barometer . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.7.1 Sensor Model . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.7.2 Error Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

5 Control System 47
5.1 Control System Description . . . . . . . . . . . . . . . . . . . . . . 47
5.2 State Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.1 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2.2 Follow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2.3 Descend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3.1 Vertical Controller . . . . . . . . . . . . . . . . . . . . . . . 53
5.3.2 Horizontal pid Controller . . . . . . . . . . . . . . . . . . . 53
5.3.3 Proportional navigation Control . . . . . . . . . . . . . . . . 55
5.4 Control System Evaluations . . . . . . . . . . . . . . . . . . . . . . 57

6 Landing System Evaluation 65


6.1 Landing System Simulation . . . . . . . . . . . . . . . . . . . . . . 65
6.2 Estimation System Validation . . . . . . . . . . . . . . . . . . . . . 71

7 Conclusion and Future Work 79


7.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

A Hardware Drivers 85
A.1 XSens Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.2 Camera Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.3 Decawave data extraction . . . . . . . . . . . . . . . . . . . . . . . . 85

B Simulated Sensors 87
B.1 Attitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
B.2 Ultra-Wide Band Sensor Network . . . . . . . . . . . . . . . . . . . 87
B.3 Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
B.4 Accelerometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
B.5 Reference Barometer . . . . . . . . . . . . . . . . . . . . . . . . . . 88

C Gazebo Plugins 89
C.1 Wind Force Plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
C.2 Air Resistance Plugin . . . . . . . . . . . . . . . . . . . . . . . . . . 89
C.3 Random Trajectory Plugin . . . . . . . . . . . . . . . . . . . . . . . 90

D Additional Results 91
D.1 Time Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
D.2 Root Mean Square Error . . . . . . . . . . . . . . . . . . . . . . . . 93

Bibliography 97
Notation

Notations

Notation Meaning
[x, y, z] Position in a Cartesian coordinate system
[φ, θ, ψ] Euler angles roll, pitch and yaw
{a} Coordinate frame a
^aR_b Rotation matrix from {b} to {a}
x State vector
cx cos x
sx sin x
R Covariance matrix
|| · ||n n-norm of ·
ȧ Time derivative of a
⊗ Kronecker product
AT Transpose of matrix A
≜ Definition statement


Abbreviations

Abbreviation Meaning
awgn Additive White Gaussian Noise
crlb Cramér-Rao Lower Bound
csi Camera Serial Interface
foi Swedish Defence Research Agency
gnss Global Navigation Satellite System
gps Global Positioning System
ekf Extended Kalman Filter
fov Field of View
imu Inertial Measurement Unit
ins Inertial Navigation System
los Line of Sight
mpc Model Predictive Control
nis Normalised Innovation Squared
nlos Non Line of Sight
pd Proportional, Derivative (controller)
pdf Probability Density Function
pid Proportional, Integral, Derivative (controller)
pn Proportional Navigation
rmse Root Mean Square Error
ros Robot Operating System
rtk Real Time Kinematic
rsta Reconnaissance, Surveillance and Target Acquisition
sitl Software In The Loop
toa Time of Arrival
uav Unmanned Aerial Vehicle
ugv Unmanned Ground Vehicle
uwb Ultra-Wide Band
1 Introduction
This master’s thesis project was conducted at the Swedish Defence Research Agency
(foi) at the Division of C4ISR1 in Linköping. The main goal of the project was to
investigate the possibility of landing a quadrotor Unmanned Aerial Vehicle (uav)
on top of a mobile Unmanned Ground Vehicle (ugv) in a gnss-denied scenario.
The purpose of this chapter is to present the background as to why the project
was conducted, related work done in the research community, the approach of the
thesis, the problems the thesis aims to answer and the limitations and contribu-
tions of the project. Finally, an outline of the thesis is provided.

1.1 Background
During the last couple of decades, the interest in, as well as the research and de-
velopment (R&D) on, unmanned and autonomous systems has grown rapidly. In
this thesis, autonomous systems are defined as systems which can operate with-
out direct human control or intervention. The various R&D activities have re-
sulted in unmanned systems that can perform increasingly more complex tasks
and support military operations in all domains (air, ground and sea). Currently,
the R&D community is exploring various approaches for combining different
types of unmanned platforms, which can complement each other’s abilities. One
such conceivable scenario is an unmanned ground vehicle (ugv) acting as a host
platform for an unmanned aerial vehicle (uav) to carry out various missions, as
explored in [1]. The underlying idea is to exploit the advantages while mitigating
the disadvantages for both types of vehicles. The ground vehicle can continue to
operate during harsh weather conditions and has the endurance to operate for an
extensive period. In contrast, small uavs have limited endurance. The benefit
1 Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance


of the uav is instead that it can move at higher speeds, can cover large areas and
provide an overview of the surrounding environment. The uav may also act as
an elevated communication relay. Additionally, the ugv acts as a transport and
recharge station for the uav.
The task of developing a combined uav and ugv reconnaissance, surveillance
and target acquisition (rsta) system has several non-trivial research challenges,
which each could encapsulate a single thesis project. Therefore, the challenge
which this master’s thesis is exploring concerns the autonomous landing of the
uav on top of a moving ugv.
In order to perform an autonomous landing, a basic requirement is to esti-
mate the relative position and orientation of the uav with respect to the ugv.
The localisation is performed with sensors on the uav as well as on the ugv. Be-
cause of the military application and the importance of the landing phase, there
are requirements on the sensors regarding robustness as well as low probability
of detection. Additionally, a future goal is for the uav to be able to land at night
time as well, which should be considered in the choice of sensors. One common
approach to localise a uav is with the use of global navigation satellite system
(gnss) receivers. Though gnss-receivers are small, low cost and readily avail-
able, they do come with some flaws. In urban environments gnss signals can be
subjected to multi-path propagation, attenuation and diffraction which can im-
pair the accuracy of the gnss-receivers [2]. gnss-receivers are also sensitive to
jamming and spoofing. For example Kerns et al. [3] were able to send a drone into
an unwanted dive through spoofing of a global positioning system (gps) signal.
Because of these flaws, this thesis will investigate the possibility of autonomously
landing a uav on a moving ugv in a gnss-denied scenario.
A localisation technique that has gained attention lately, is the use of ultra-
wide band (uwb) impulse radios. uwb radios send data at bandwidths surpassing
500 MHz. The large bandwidth has many advantages, such as being insensitive
to disturbances as well as not being affected by multipath propagation [4]. Addi-
tionally, it results in a low power spectral density, which reduces the risk of being
detected. A thorough description of how uwb radios work and how they can be
used for localisation is provided in [4]. The standard approach of localisation
using uwb radios is based on having multiple stationary uwb radios (anchors)
placed in an environment in which a mobile uwb radio (tag) is to be located. The
distances between the tag and the anchors are measured and used to calculate an
estimate of the tag’s position relative to the anchors. When combining the dis-
tance measurements, a 3D position can be triangulated with a minimum of four
anchors [5].

1.2 Related Work


The task of landing a uav on a mobile ugv is not entirely new and has been stud-
ied before in various scientific publications. A couple of research contributions
propose solutions that work well in real-world conditions with the ugv mov-
ing at a relatively high velocity. Persson [6] was able to land a fixed-wing uav

on a car moving at 70 km/h. During the landings a pid control structure was
used. The position estimation of the vehicles was achieved using two real time
kinematic (rtk) gps receivers together with a camera system measuring the rela-
tive position between the vehicles. The work was extended in [7] where a model
predictive control (mpc) structure was used instead.
In [8] a similar result was achieved with a quadrotor uav. Borowczyk et al.
managed to land the quadrotor on a car travelling at speeds up to 50 km/h. In this
case a proportional navigation (pn) controller was used to approach the car. A
switching strategy was used, where a proportional-derivative (pd) controller was
used during the landing phase. An inertial measurement unit (imu), a camera
and gps receivers were used together with a Kalman filter to estimate the motion
of the two vehicles.
In 2017, the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) was or-
ganised with the goal of expanding the state-of-the-art in a variety of robotics do-
mains [9]. One challenge in the competition was to autonomously land a uav on
a moving ugv (15 km/h). The system Baca et al. [9] participated with achieved
the shortest landing time. In their solution a mpc structure was used. To get a
state estimate of the uav’s movements, inertial sensors, a rangefinder, a camera
and an rtk gps receiver was used. However, no sensors could be placed on the
ugv. Baca et al. therefore used a prediction strategy for the ugv’s movements.
Although these research groups achieved the goal of autonomous landing, all the
proposed approaches rely on gnss-receivers.
Autonomous landing of a uav in a gnss-denied scenario is an active research
field, primarily considering confined indoor environments, and the concept has
been investigated by various research groups. The most common strategy is posi-
tioning based on visual data, usually a camera identifying a fiducial marker. In
[10] a vision-based detection algorithm is used to estimate the 3D-position of a
uav relative to a tag. Wenzel et al. [11] utilise an IR-camera mounted on a uav
to track a pattern of IR-lights on a moving platform. The AprilTag fiducial algo-
rithm is used with a camera for positioning during an autonomous landing of a
uav in [12] and [13]. The common denominator of these works is that they all
utilise a proportional-integral-derivative (pid) controller in order to accomplish
the autonomous landing. The focus on visual data in these works means that they
can only land from a starting position in close proximity to the ugv, since the
ugv must be within the camera’s field of view (fov) during the majority of the
landing.
An approach to gnss-denied positioning that works at longer distances is to use
uwb anchors to position a tag in a sensor network. For example, [14, 15]
examined the specific task of estimating the position of a uav with uwb radios.
These articles show successful results; however, the positioning is conducted in
a closed environment where the tag (on the uav) is positioned in the volume
spanned by the anchors. Such an approach is not feasible in an arbitrary environ-
ment. Lazzari et al. [5] numerically investigate the approach of having anchors
mounted on a ugv and a tag mounted on a uav. The results from [5] are promis-
ing, with 3D-positioning of the uav achieved at distances up to 80 m.

1.3 Approach
In this work, the estimation techniques used in gnss-denied scenarios are com-
bined with control approaches that have proven to work well in challenging real-
world scenarios. The landing system is divided into an estimation system and a
control system. The estimation system has the goal of estimating the relative posi-
tion between the vehicles. The control system utilises these estimates to steer the
uav towards the ugv, and eventually land.
The approach in the estimation system is to substitute gnss-receivers with
a uwb sensor network. The uwb sensor network consists of four uwb anchors
mounted on the ugv measuring the distance to a uwb tag, which is mounted on
the uav. To aid in the relative positioning at shorter distances, a camera system
consisting of a uav-mounted camera identifying a fiducial marker on the ugv
is added. Additionally, an imu is mounted on each vehicle. The imu provides
3D accelerations, rotation rates as well as magnetic field and air pressure (scalar)
measurements. An extended Kalman filter (ekf) utilises these sensors to estimate
a relative position between the vehicles.
The proposed control system is mainly based on [8]. Two control laws are
used. A pn law is used to approach the ugv. At closer distances, the control
system switches to a pid controller with the goal of matching the movement of
the ugv, while descending towards it.

1.4 Problem Formulation


The high level goal of the system is to land the uav in a safe and controlled
manner on the ugv, by utilising the sensors mounted on both vehicles. Based on
the approach presented, the thesis aims to answer the following questions:

• How should the estimation system be designed in order to provide adequate state estimates for a landing in a gnss-denied scenario?

• How should the control system be designed to achieve a controlled landing on a mobile ugv?

Also, since the long-term goal is a system that can land independent of weather
conditions or time of day, the following question is also interesting:

• Is the camera vital in the estimation system?

1.5 Limitations
This thesis will only consider the landing phase for the uav, which is when the
uav is at most 50 m away from the ugv. The ugv is therefore assumed to be
in line of sight of the uav during the entire landing task. Additionally, it is
assumed that the landing takes place in an obstacle-free environment. The ugv
is also assumed to be moving only in the horizontal plane. Finally, the landing

system can only control the uav when performing the landing manoeuvre. These
limitations do not restrict the core problem of autonomously landing a uav in a
gnss-denied scenario.

1.6 Contributions
Due to the complexity of the task, openly available software is utilised whenever
possible. The main contribution of the thesis is the selection and evaluation of
estimation and control algorithms, as well as the design and software integration
of the system. More specifically, the contributions in this thesis are the following:
• Evaluation of the uwb radios' performance.
• Analysis of uwb anchor configuration using crlb.
• Design of a complete landing system, consisting of an estimation and a
control system.
• Evaluation of the landing system in a realistic simulation environment.
• Experimental evaluation of the estimation system’s performance during the
final descent.
With regard to the personal contributions, Albin Andersson Jagesten was re-
sponsible for integrating the simulation environment, implementing the landing
system and the development of the Control System. Alexander Källström had the
responsibility of the development of the Estimation System, the evaluation of the
uwb radios, analysis of the uwb anchor configuration as well as the experimental
evaluation of the estimation system. However, both authors were involved in the
development of all the various parts of this master’s thesis project.

1.7 Outline of Thesis


The outline of this thesis is:
• Chapter 2 gives a description of the uav and ugv, as well as the sensors
mounted on each vehicle. The simulation environment is presented as well.
• Chapter 3 presents the theory behind the topics relevant to this work.
• Chapter 4 describes the estimation system, as well as a description and eval-
uation of each sensor.
• Chapter 5 presents the idea behind, and implementation of the control sys-
tem as well as the results generated in simulation.
• Chapter 6 describes the results generated in simulation using both the es-
timation and the control system. Experimental results of the estimation
system are also presented.
• Chapter 7 discusses the main results and presents ideas for future work.
2 System Description
This chapter aims to describe the different parts that comprise the landing sys-
tem including hardware platforms, sensors and software. A description of the
simulation environment is provided as well. The hardware platforms, as well as
the sensors mounted on the uav and ugv, are described in Sections 2.1 and 2.2,
respectively. A short description of the software tools used in the project is pro-
vided in Section 2.3. Finally, the simulation environment is presented in Section
2.4.

2.1 Unmanned Aerial Vehicle


The uav consists of a LIN700 platform equipped with a 3DR Pixhawk Mini2 au-
topilot. The Pixhawk Mini is running the open-source flight controller firmware
ArduCopter. However, the main processing unit on the uav is a Jetson Nano De-
veloper Kit3 single-board computer. The Jetson Nano handles the data from the
sensors on the uav, ugv and from the Pixhawk Mini. Based on the received sen-
sor data, the Jetson sends appropriate control commands to the Pixhawk with the
goal of landing the uav on the ugv. An overview of the uav system architecture
is presented in Figure 2.1. The individual parts of the uav are described in the
following subsections.

2.1.1 Platform
The uav used in this project is a quadrotor platform that can be seen in Figure
2.2. The uav has the following sensors mounted on it:
2 For a detailed description, see: https://docs.px4.io/v1.9.0/en/flight_controller/pixhawk_mini.html
3 Jetson Nano Developer Kit documentation: https://developer.nvidia.com/embedded/jetson-
nano-developer-kit


Figure 2.1: Overview of the components that make up the landing system.

• A Decawave DWM1001-DEV uwb module.


• A Raspberry Pi Camera Module V2.
• An XSens MTi-100 imu.

The sensors are mounted on a platform, see Figure 2.3, which in turn is mounted
underneath the uav. The sensors are connected to the Jetson Nano processor unit,
which is also mounted on the platform. The drivers used to communicate with
the sensors are described in Appendix A.

2.1.2 Sensors
The uav is equipped with a Decawave DWM1001 uwb module acting as a uwb
tag in the uwb sensor network. The DWM1001 contains a uwb-transceiver as
well as a processing unit. The uwb tag measures the individual distances to the
anchors in the uwb sensor network.
The Raspberry Pi Camera Module V2 is a camera facing downwards from the
uav. The camera has a horizontal field of view (fov) of 62.2◦ and is configured
to send images with a dimension of 640 × 480 pixels at a rate of 10 Hz.
The Xsens MTi-100 imu is utilised for its accelerometer and barometer mea-
surements. The three-axis accelerometer measures the acceleration vector of the
uav and the barometer measures the pressure of the air around the uav.
As presented in Figure 2.1, the Jetson Nano receives sensor data from the
Pixhawk Mini, which is in the form of an attitude estimate of the uav.

Figure 2.2: LIN700 uav platform.


Figure 2.3: Sensor platform attached to the uav.



2.2 Unmanned Ground Vehicle


The ugv platform and the sensors mounted on it are described.

2.2.1 Platform
The ugv is a modified electrical wheelchair. It has been used as a test platform
in previous research at foi [16, 17]. On top of the wheelchair a landing platform
is mounted with dimensions 1.5 × 1.5 m². A fiducial marker with dimensions 1.4
× 1.4 m² is placed on the centre of the landing platform. Four 0.5 m long beams
pointing upwards are attached to each of the platform’s corners. The beams are
used for sensor placement. The following sensors have been mounted on the
ugv:

• Four Decawave DWM1001 uwb modules.

• An XSens MTi-600 DEV imu.

The resulting ugv platform can be seen in Figure 2.4.

Figure 2.4: ugv platform with four uwb anchors and an AprilTag fiducial
marker. The uwb anchors are highlighted with red.

2.2.2 Sensors
The four Decawave DWM1001 uwb modules are used as anchors in the uwb
sensor network. The uwb modules are attached to the landing platform corners,
two in level with the platform and two on top of the beams, see Figure 2.4.
The Xsens imu provides acceleration and air pressure measurements, as well
as estimates of the ugv orientation.

2.3 Software Tools


This section describes the main software tools utilised in the system.

2.3.1 Robot Operating System


The Robot Operating System4 (ros) is an open-source framework for robot soft-
ware. The framework provides a communication interface together with a vari-
ety of open-source software packages for robot applications. Additionally, open-
source tools for hardware abstraction, data visualisation, data recording and data
playback that support the ros communication interface are available.

2.3.2 ArduCopter Flight Controller


ArduCopter5 is an open-source uav flight controller made by ArduPilot. The
flight controller supports a variety of multi-rotor uavs and in this thesis a quad-
copter is used. An open-source software in the loop (sitl) simulator6 of the flight
controller is utilised. The sitl simulator enables the use of the flight controller
when flying a simulated quadcopter in Gazebo.

2.4 Simulation Environment


To allow efficient and safe development of the landing system, a realistic simu-
lation environment is needed. In this work the open source simulation environ-
ment Gazebo is used. Gazebo is specifically tailored for robotics and it is fully
compatible with ros and the ArduCopter sitl implementation.
A uav Gazebo model is included in the ArduCopter sitl implementation.
The model is slightly modified such that it has a fixed downfacing camera. The
uav model can be seen in Figure 2.5. The model is equipped with the same
sensors as the real uav. The ugv model was created from scratch in Gazebo.
It can be seen in Figure 2.6. The ugv is equipped with an imu and four uwb-
anchors. Plugins, which are blocks of code that execute during runtime, are used
to interact with the models. The plugins created in this work are described in
4 For more information about the robot operating system, see: https://www.ros.org/about-ros/
5 ArduCopter documentation: https://ardupilot.org/copter/
6 ArduPilot Software In The Loop simulator documentation: https://ardupilot.org/dev/docs/sitl-
simulator-software-in-the-loop.html

Appendix C. The implementation of the simulated sensors used in the simulation


environment is presented in Appendix B.

Figure 2.5: Gazebo uav simulation model.

Figure 2.6: Gazebo ugv simulation model.


3 Theory
This chapter presents the various theoretical concepts that are applied in this
thesis.

3.1 Coordinate Systems


This section describes the coordinate frames used in this thesis. Three main co-
ordinate frames are introduced in Figure 3.2. The frame {W} denotes the world
frame where both the uav and the ugv reside. The frame {UAV} and the frame
{UGV} represent the body fixed frame for the uav and the ugv respectively. The
placement of the uwb sensors on the ugv can be seen in Figure 3.1. The uav has
a camera and an imu attached. The orientation of the two sensors in the {UAV}
frame can be seen in Figure 3.3.
(a) View from above. (b) View from behind.

Figure 3.1: The placement of the UWB sensors in the {UGV} frame.


Figure 3.2: The three main coordinate frames. {W} denotes the world frame,
{UAV} denotes the body fixed coordinate frame of the uav and {UGV} denotes
the body fixed coordinate frame of the ugv.


Figure 3.3: The camera and imu placement and orientation in the {UAV}
frame. The view is from underneath.

3.2 Rotation Representations


This section introduces two ways of representing the rotation of one coordinate
frame with respect to another. The first representation is with Euler angles and
the second with a unit quaternion vector.
The Euler angles representation is based on three subsequent coordinate ro-
tations. These coordinate rotations can describe any arbitrary rotation [18]. The
rotations are specified by the rotation angles roll, pitch and yaw. In this work the
right-hand rule is used as the rotation convention. The roll angle, φ, represents
a rotation around the x-axis, the pitch angle, θ, represents a rotation around the
y-axis and the yaw angle, ψ, represents a rotation around the z-axis. The rotation
matrices are expressed as follows

1 0 0   cθ 0 sθ  cψ −sψ 0


     
Rφ = 0 cφ −sφ  , Rθ =  0 1 0  , Rψ =  sψ cψ 0 , (3.1)
  
0 sφ cφ −sθ 0 cθ 0 0 1
     

where cv = cos(v) and sv = sin(v).


These three rotation matrices form the rotation matrix
{}^{a}R_b = R_\psi R_\theta R_\phi ,    (3.2)
where a and b are two coordinate frames and b is rotated relative to a. A vector
xb ∈ R3 expressed in frame b can be transformed to frame a as follows
x^a = {}^{a}R_b \, x^b .    (3.3)
A unit quaternion q is a vector with length one that comprises four real num-
bers q = [x, y, z, w]T . One way of understanding the quaternion representation
is to divide the quaternion vector into a vector part [x, y, z]T and a scalar part w.
Any finite rotation can be described by a single rotation α around a normalised
axis n [18]. The vector part [x, y, z]^T is equal to n sin(α/2) and the scalar part w is
equal to cos(α/2). The resulting quaternion is therefore

q = \begin{bmatrix} n \sin(\alpha/2) \\ \cos(\alpha/2) \end{bmatrix} .    (3.4)
The conversion from Euler angles to a unit quaternion vector is
x = sin(φ/2) cos(θ/2) cos(ψ/2) − cos(φ/2) sin(θ/2) sin(ψ/2) (3.5a)
y = cos(φ/2) sin(θ/2) cos(ψ/2) + sin(φ/2) cos(θ/2) sin(ψ/2) (3.5b)
z = cos(φ/2) cos(θ/2) sin(ψ/2) − sin(φ/2) sin(θ/2) cos(ψ/2) (3.5c)
w = cos(φ/2) cos(θ/2) cos(ψ/2) + sin(φ/2) sin(θ/2) sin(ψ/2) (3.5d)
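As an illustration of (3.5), a minimal Python sketch of the Euler-to-quaternion conversion is given below. It is only a sketch under the conventions of this section (roll about x, pitch about y, yaw about z, right-hand rule); the function and variable names are illustrative and not part of the thesis software.

import numpy as np

def euler_to_quaternion(roll, pitch, yaw):
    # Convert Euler angles (rad) to a unit quaternion [x, y, z, w], following (3.5).
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    x = sr * cp * cy - cr * sp * sy   # (3.5a)
    y = cr * sp * cy + sr * cp * sy   # (3.5b)
    z = cr * cp * sy - sr * sp * cy   # (3.5c)
    w = cr * cp * cy + sr * sp * sy   # (3.5d)
    return np.array([x, y, z, w])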

3.3 Quadcopter Dynamical Model


This section presents the dynamical model of a quadcopter. The low-level control
commands for a quadcopter are the thrusts of the individual rotors. The rotor thrusts can be

modelled more thoroughly, but in this thesis they are simplified to the orientation
of the uav as well as a total thrust vector along the z-axis of the body-frame. The
dynamical model has the purpose of describing the resulting dynamics from the
orientation and thrust vector.
Let (x, y, z) be the position of the uav in the {W}-frame, (φ, θ, ψ) the roll,
pitch and yaw angle of the uav with respect to the {W}-frame. Furthermore, T
denotes the total thrust generated by the rotors, i.e. the magnitude of the force
vector propelling the uav along the direction of the z-axis in the {UAV}-frame. A
second force vector affecting the uav is from the force of gravity. Mahony et al.
[19] present a model of the resulting dynamics from the two force vectors as the
combined accelerations expressed in the {W}-frame. The model is

m \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = m \begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix} + {}^{W}R_{UAV} \begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix} ,    (3.6)

where m is the mass of the quadcopter, g is the gravitational acceleration and {}^{W}R_{UAV} the rotation matrix from {UAV} to {W}, which is given by

{}^{W}R_{UAV} = \begin{bmatrix} c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\ -s_\theta & s_\phi c_\theta & c_\phi c_\theta \end{bmatrix} .    (3.7)

By inserting (3.7) into (3.6) and dividing by m, the resulting model is

\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix} + \begin{bmatrix} c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\phi s_\theta s_\psi - s_\phi c_\psi \\ c_\phi c_\theta \end{bmatrix} \frac{T}{m} .    (3.8)

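To make the use of (3.8) concrete, the following Python sketch computes the translational acceleration in the {W}-frame from an attitude and a total thrust. It is a sketch only; the argument names are illustrative and the gravitational constant is the standard value also listed in Table 3.1.

import numpy as np

def quadcopter_acceleration(roll, pitch, yaw, thrust, mass, g=9.80665):
    # Resulting acceleration [xddot, yddot, zddot] in the {W}-frame, as in (3.8).
    c, s = np.cos, np.sin
    direction = np.array([
        c(roll) * s(pitch) * c(yaw) + s(roll) * s(yaw),
        c(roll) * s(pitch) * s(yaw) - s(roll) * c(yaw),
        c(roll) * c(pitch),
    ])
    return np.array([0.0, 0.0, -g]) + direction * thrust / mass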
3.4 Kalman Filter Theory


When utilising measurements from multiple sensors with varying characteristics
such as uncertainty and update rate, it is not trivial to combine the sensor data.
One way of combining the data is with a Kalman filter [20]. The Kalman filter
is a recursive algorithm that estimates several unknown variables, called states,
with a multivariate normal distribution, which is described with a mean µ and a
covariance matrix Σ. The filter estimates the states using sensor measurements, a
measurement model and a dynamic model. The Kalman filter assumes a dynamic
and measurement model where the states enter as linear arguments. However,
that is not the case for most real world systems.
The extended Kalman filter (ekf) is an extension of the Kalman filter that
handles nonlinear dynamic and measurement models [21]. The key idea is to
linearize the nonlinearities. This can be done in several ways, but a common
approach is using first order Taylor series expansions.
Consider the nonlinear dynamical model g0 and the nonlinear sensor mea-
surement model h0 with zero-mean additive white Gaussian noise (awgn), the

combined expressions are defined as

g(x_{t-1}, u_t, \epsilon_t) = g_0(x_{t-1}, u_t) + \epsilon_t , \quad \epsilon_t \sim N(0, Q_t) ,    (3.9)

h(x_t, \delta_t) = h_0(x_t) + \delta_t , \quad \delta_t \sim N(0, R_t) .    (3.10)

Using the first order Taylor expansion, µ and Σ are estimated with the following
recursive scheme

µ̂t = g0 (µ̃t−1 , ut ) (3.11a)


Σ̂t = Gt Σ̃t−1 GtT + Qt (3.11b)
St = Ht Σ̂t HtT + Rt (3.11c)
Kt = Σ̂t HtT St−1 (3.11d)
µ̃t = µ̂t + Kt (zt − h0 (µ̂t )) (3.11e)
\tilde{\Sigma}_t = (I - K_t H_t)\hat{\Sigma}_t (I - K_t H_t)^T + K_t R_t K_t^T    (3.11f)

where S denotes the innovation covariance, K the Kalman gain and

G_t = \left. \frac{dg_0(x, u)}{dx} \right|_{x=\hat{\mu}_t,\, u=u_t}    (3.12a)

H_t = \left. \frac{dh_0(x)}{dx} \right|_{x=\hat{\mu}_t} .    (3.12b)

The recursions start from the initial guess of µ̂0 and Σ̂0 . The equations (3.11a)
and (3.11b) are called the prediction step. These equations propagate the system
dynamics through time using the dynamical model. This step predicts the cur-
rent state of the system (µ̂t , Σ̂t ). During the prediction step, the uncertainty in
the estimate grows, i.e. the covariance matrix gets larger.
The second step incorporates the measurements in the estimate and is called
the measurement step. In this step the prediction is corrected by the measure-
ment and the confidence in the estimate increases. The measurement step con-
sists of (3.11c)-(3.11f) and returns the estimates (µ̃t , Σ̃t ). The final step in (3.11f)
is expressed in the Joseph form. The Joseph form is equivalent to the Σ̃t calcula-
tion of the standard ekf, however, it is stated in a numerically stable expression
[22].
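The recursion (3.11) can be summarised in a short Python sketch of one filter cycle, shown below. It is a generic illustration rather than the filter implemented in this thesis; g0, h0 and the Jacobian functions G and H are assumed to be supplied by the user.

import numpy as np

def ekf_step(mu, Sigma, u, z, g0, h0, G, H, Q, R):
    # Prediction step, (3.11a)-(3.11b)
    mu_pred = g0(mu, u)
    Gt = G(mu, u)
    Sigma_pred = Gt @ Sigma @ Gt.T + Q
    # Measurement step, (3.11c)-(3.11f)
    Ht = H(mu_pred)
    S = Ht @ Sigma_pred @ Ht.T + R
    K = Sigma_pred @ Ht.T @ np.linalg.inv(S)
    mu_new = mu_pred + K @ (z - h0(mu_pred))
    I_KH = np.eye(len(mu)) - K @ Ht
    Sigma_new = I_KH @ Sigma_pred @ I_KH.T + K @ R @ K.T   # Joseph form, (3.11f)
    return mu_new, Sigma_new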

3.4.1 Measurement Outliers


Real sensor data often contains measurement outliers, i.e. measurements with large errors.
These outliers should not be used in the filter. One way of detecting measure-
ment outliers is with the Mahalanobis distance which is used to form the nor-
malised innovation squared (nis) test [23]. The nis is computed as the squared
Mahalanobis distance of the measurement error with the innovation matrix S,
equation (3.11c), as normaliser:

nis = (zt − h(µ̄t ))T S −1 (zt − h(µ̄t )) . (3.13)



The nis is χ2 -distributed, where the degrees of freedom are equal to the number
of measurements in the measurement vector zt . A confidence level of the χ2 -
distribution is used to classify the measurements as either a valid sample or as an
outlier.
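A possible implementation of this gating test is sketched below in Python, using the chi-square quantile as threshold. The 95% level used later in Section 4.2 is taken as the default; the names are illustrative.

import numpy as np
from scipy.stats import chi2

def is_outlier(z, z_pred, S, confidence=0.95):
    # NIS test (3.13): flag the measurement as an outlier if the NIS exceeds
    # the chi-square quantile with dim(z) degrees of freedom.
    innovation = z - z_pred
    nis = innovation.T @ np.linalg.inv(S) @ innovation
    return nis > chi2.ppf(confidence, df=len(z))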

3.4.2 Measurement Information Gain


When considering the information gain of a measurement the Cramér-Rao lower
bound (crlb) can be used. The crlb is calculated from a sensor measurement
model, and returns a lower bound for the covariance matrix of the estimated
states, i.e.
Σ ≥ CRLB , (3.14)
assuming an unbiased estimator. The lower bound gives an indication of how
accurately the states can be estimated. To calculate the crlb, consider a linear
sensor measurement model h0 (x) = H x with a constant covariance matrix Rt = R.
As described in [24], the crlb is

CRLB = (H T R−1 H)−1 (3.15)
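As a small illustration of (3.15), the bound can be computed directly from the measurement matrix and noise covariance, as in the Python sketch below. For a nonlinear model, H would be the Jacobian evaluated at the point of interest; this is a sketch only, not the exact code used in the anchor-configuration analysis of Section 4.4.3.

import numpy as np

def crlb(H, R):
    # Cramer-Rao lower bound (3.15) for z = Hx + e, e ~ N(0, R).
    return np.linalg.inv(H.T @ np.linalg.inv(R) @ H)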

3.5 Position Trilateration


As mentioned earlier, the ekf algorithm requires an initial guess. In this work,
the ekf will be used to estimate the position of the uav relative to the ugv. The
initial guess for the relative position can be obtained by trilaterating the range
data from the uwb radios. In [25] a number of methods are investigated to es-
timate the position of a radiating source using range measurements from m bea-
cons, which resembles the uwb radios in this work. Beck et al. [25] assume that
the range measurement from beacon i has the following form

r_i = \| x - a_i \|_2 + \epsilon_i ,    (3.16)

where x ∈ Rn is the position to be estimated, ai ∈ Rn is the position of the ith


beacon and \epsilon_i is awgn. One of the examined methods is an unconstrained least
square approach, which has the following form

\min_{y \in \mathbb{R}^{n+1}} \| Ay - b \|_2 ,    (3.17)

where

A = \begin{bmatrix} -2a_1^T & 1 \\ \vdots & \vdots \\ -2a_m^T & 1 \end{bmatrix} , \quad b = \begin{bmatrix} r_1^2 - \|a_1\|_2^2 \\ \vdots \\ r_m^2 - \|a_m\|_2^2 \end{bmatrix} .    (3.18)
The solution is then given by

y∗ = (AT A)−1 AT b . (3.19)

The first n terms of y^* are the estimate of x.
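The unconstrained least-squares estimate (3.17)–(3.19) can be written compactly, as in the Python sketch below. It is an illustration of the method from [25], not the exact initialisation code used in this work; the names are illustrative.

import numpy as np

def trilaterate(anchors, ranges):
    # anchors: (m, n) array of beacon positions a_i, ranges: (m,) measured ranges r_i.
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    A = np.hstack([-2.0 * anchors, np.ones((anchors.shape[0], 1))])   # rows [-2 a_i^T, 1], cf. (3.18)
    b = ranges**2 - np.sum(anchors**2, axis=1)                        # r_i^2 - ||a_i||^2, cf. (3.18)
    y, *_ = np.linalg.lstsq(A, b, rcond=None)                         # solves (3.17), cf. (3.19)
    return y[:-1]                                                     # first n terms of y* estimate x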



3.6 Camera Model


A pinhole camera model is used to model the camera attached to the uav. The
pinhole model assumes that a 3D-scene is projected on the image plane via a
perspective transformation, see Figure 3.4. The model consists of intrinsic and
extrinsic parameters [26]. The extrinsic parameters describe the spatial relation
between the camera and the objects observed in the camera frame, i.e. a transla-
tion and a rotation. The intrinsic parameters describe the relation between world
coordinates and pixel coordinates in the image data. The intrinsic parameters
are associated with the specific camera and its settings. Therefore, the intrinsic
parameters can be estimated once and then reused.


Figure 3.4: Pinhole model illustration where f is the focal length and Z is the
distance to the camera.

The goal of the camera model is to map global coordinates [X, Y , Z]T to pixel
coordinates [u, v]T . The first step in the model is to transform the global coordi-
nates to the camera frame using the extrinsic parameters as follows

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t ,    (3.20)

where [x, y, z]T are the coordinates in the camera frame. The matrix R and the
vector t describe a rotation and a translation, respectively, which together encapsulate
the extrinsic parameters. The next step is to project [x, y, z]^T to a normalised
image plane using a perspective transformation as follows

x0 = x/z, y 0 = y/z. (3.21)

The next step in the model is to account for radial and tangential distortions with
the plumb bob model [27]
x'' = \left[ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right] x' + \left[ 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \right]    (3.22a)
y'' = \left[ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right] y' + \left[ p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \right]    (3.22b)
r = \sqrt{x'^2 + y'^2} ,    (3.22c)

where k1 ,k2 and k3 are radial coefficients and p1 and p2 are tangential distortion
coefficients. The pixel coordinates u and v are finally obtained as

u = f_x \cdot x'' + c_x    (3.23a)
v = f_y \cdot y'' + c_y ,    (3.23b)

where fx and fy describe the focal lengths in pixel coordinates. The constants cx
and cy describe the centre of the image in pixel coordinates. To summarise, the
intrinsic parameters are: k1 , k2 , k3 , p1 , p2 , fx , fy , cx and cy .
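The full projection chain (3.20)–(3.23) is summarised in the Python sketch below. It is a sketch of the model only, not the calibration or detection code used in this work; the parameter names mirror the notation above.

import numpy as np

def project_point(P_world, R, t, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    # Extrinsics (3.20): world coordinates -> camera frame
    x, y, z = R @ np.asarray(P_world, dtype=float) + t
    # Perspective projection to the normalised image plane (3.21)
    xp, yp = x / z, y / z
    # Plumb bob distortion (3.22)
    r2 = xp**2 + yp**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xpp = radial * xp + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)
    ypp = radial * yp + p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp
    # Intrinsics (3.23): normalised image plane -> pixel coordinates
    return fx * xpp + cx, fy * ypp + cy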

3.7 Proportional-Integral-Derivative Control


Proportional-integral-derivative (pid) control is a feedback control approach that
is widely used in industrial control systems [28]. Assume that a system with input
u(t) and output y(t) is to be controlled. With a desired set point r(t), the goal of
the controller is to generate an input u(t) that makes the output y(t) follow the
set point, i.e. y(t) = r(t). A feedback controller controls the input u(t) based on
the error signal e(t), which is the difference between the set point and the output,
i.e. e(t) = r(t) − y(t).
The pid-controller consists of three parts:
• P – Accounts for the current error between r(t) and y(t) and generates an
output that is proportional to the error.
• I – Accounts for past values of the error signal and generates an output that
is proportional to the integral of the error, which therefore counteracts a
static error.
• D – The generated signal is proportional to the derivative of the error and
therefore accounts for estimated future values of the error signal, i.e. re-
duces overshoots.
The various parts have the corresponding proportional gains (Kp , Ki , Kd ). The
combined control signal is:

u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt}    (3.24)

Figure 3.5 presents a pid control feedback system.

Figure 3.5: A PID control feedback system.


A pid controller without the I-part (Ki = 0) is called a pd-controller.
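A minimal discrete-time Python sketch of (3.24) is given below; it approximates the integral with a running sum and the derivative with a backward difference. It is illustrative only, and setting ki = 0 yields the pd controller mentioned above.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement                  # e(t) = r(t) - y(t)
        self.integral += error * dt                     # accumulated integral of the error
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative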

3.8 Proportional Navigation Control


Proportional navigation (pn) is a guidance law [29]. Guidance laws are control
laws with the goal of navigating a pursuer towards a target and reach it as fast
as possible. The pn law is based upon the idea that the pursuer eventually will


collide with the target if the line of sight (los) vector between them remains
constant and that the distance is decreasing. This is achieved by generating an
acceleration target a⊥ for the pursuer, which is perpendicular to the los-vector,
such that when the target and the pursuer are moving, the los-vector remains
the same.
Let the position and velocity of the pursuer be pP and vP . Additionally, let the
position and velocity of the target be pT and vT . Then, let the relative position
and velocity between the vehicles be prel = pT − pP and vrel = vT − vP . The
acceleration a⊥ is then computed as follows
a_\perp = -\lambda |v_{rel}| \, \frac{p_{rel}}{|p_{rel}|} \times \Omega , \quad \Omega = \frac{p_{rel} \times v_{rel}}{p_{rel} \cdot p_{rel}} ,    (3.25)
where λ is a positive gain parameter controlling the rate of rotation. See Figure
3.6 for an illustration.
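A direct Python sketch of the acceleration command in (3.25) is given below; the gain argument corresponds to λ, and the names are otherwise illustrative and not taken from the thesis implementation.

import numpy as np

def pn_acceleration(p_pursuer, v_pursuer, p_target, v_target, gain):
    p_rel = np.asarray(p_target, float) - np.asarray(p_pursuer, float)
    v_rel = np.asarray(v_target, float) - np.asarray(v_pursuer, float)
    omega = np.cross(p_rel, v_rel) / np.dot(p_rel, p_rel)     # rotation of the LOS vector
    los_unit = p_rel / np.linalg.norm(p_rel)
    return -gain * np.linalg.norm(v_rel) * np.cross(los_unit, omega)   # a_perp from (3.25)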

3.9 Barometric Formula


Both the uav and the ugv are equipped with a barometer with the purpose of es-
timating the relative altitude between the vehicles. According to [30] the relation
between relative altitude and pressure is described by the following equation:

z_{rel} = z_A - z_B = \frac{T}{L}\left[ \left( \frac{P_A}{P_B} \right)^{-\frac{LR}{g}} - 1 \right] ,    (3.26)

where T is the air temperature, zrel is the difference in altitude between point A
and point B, while P_A and P_B are the pressures at A and B, respectively. Furthermore,
L is the temperature lapse rate7 , g is the gravitational acceleration and R is the
real gas constant for air. The equation is valid up to an absolute altitude of 11 km
and can be inverted such that pressure is obtained from an altitude. The values
of L, R and g are given in [31] which presents the Standard Atmosphere Mean
Sea Level Conditions. From [31], the mean temperature at sea level T̄ as well as
the mean pressure at sea level P̄s are also obtained. The constants are presented in Table 3.1.
7 The rate at which the temperature in the atmosphere falls with altitude.


Figure 3.6: Illustration of how the acceleration a⊥ is generated from the PN


control law. prel and vrel are the relative position and the relative velocity
between the pursuer and the target. Ω describes the rotation of the los
vector and is perpendicular with prel and vrel . The acceleration a⊥ is in the
direction of the cross product from prel and Ω.

Table 3.1: Barometric constants.

        L          R              g          T̄       P̄s
Value   -0.0065    287.04         9.80665    288     101.325
Unit    K/m        m²/(K·s²)      m/s²       K       kPa
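As an illustration, (3.26) together with the constants of Table 3.1 gives the short Python sketch below for the relative altitude between two barometers. The sea-level mean temperature is used as a default here, which is an assumption of the sketch; the measured air temperature would be used in practice.

def relative_altitude(p_a, p_b, temperature=288.0):
    # Relative altitude z_A - z_B from two pressures (same unit, e.g. kPa), following (3.26).
    L = -0.0065      # temperature lapse rate [K/m]
    R = 287.04       # real gas constant for air [m^2/(K*s^2)]
    g = 9.80665      # gravitational acceleration [m/s^2]
    return (temperature / L) * ((p_a / p_b) ** (-L * R / g) - 1.0)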
4 Estimation System
The goal of the estimation system is to provide an estimate of the relative position
and velocity between the uav and the ugv. This chapter starts with an overview
of the estimation system in Section 4.1, the underlying filter structure used for
the estimation is described in Section 4.2 and a detailed description of the sensors
that are used by the estimation system is presented in Sections 4.4–4.7.

4.1 Estimation System Description


The overall goal of the estimation system is to utilise sensor data from the sen-
sors that are mounted on the uav and ugv to estimate the relative position and
velocity between the vehicles. Furthermore, the estimation system should be able
to provide an estimate when the vehicles are over 50 meters apart. At those dis-
tances the estimate can be coarse, but as the uav flies closer to the ugv, a higher
accuracy is required. The most critical part in terms of the accuracy requirement
for the estimation system is when the uav is descending towards the ugv. The
sensors used by the estimation system are described in Sections 2.1.2 and 2.2.2.
The measurements from these sensors are:

• Attitude measurements of the uav and ugv from the flight control unit as
well as the imu on the ugv, see Section 4.3.

• Distance measurements between the tag and the four anchors in uwb sen-
sor network, see Section 4.4.

• Relative position measurements from the camera system, see Section 4.5.

• Accelerometer measurements from the imu on the uav and ugv, see Sec-
tion 4.6.


• Barometer measurements from the imu on both vehicles, see Section 4.7.

The sensors are sampling data at different frequencies, the sampled data has
different accuracy and the sensors measure different quantities. In order to ac-
count for these differences, and combine the sensor measurements into estimates,
an ekf is used. The ekf takes the sensor data and outputs an estimate of the rel-
ative position and velocity of the uav with respect to the ugv in the {W}-frame.
An overview of the estimation system is presented in Figure 4.1.

Figure 4.1: Overview of the estimation system.

4.2 Extended Kalman Filter


The ekf is a filter that estimates the states in a process based on a model as well
as measurements of the process, see Section 3.4. The process examined in this
case is the relative movement of the uav with respect to the ugv. The estimated
states are the relative position and velocity of the uav with respect to the ugv,
expressed in the {W} coordinate frame. The state vector x is therefore defined as
x \triangleq \begin{bmatrix} x & y & z & \dot{x} & \dot{y} & \dot{z} \end{bmatrix}^T .    (4.1)

The input of the model, u, is the relative acceleration between the uav and ugv.
The relative acceleration is modelled with additive white Gaussian noise (awgn)
as follows

u = \begin{bmatrix} \ddot{x} & \ddot{y} & \ddot{z} \end{bmatrix}^T + \epsilon_u , \quad \epsilon_u \sim N(0, P_u)    (4.2)
The dynamical model used in the filter is based on the model used in [8]. How-
ever, instead of estimating the vehicles’ individual movements, the relative move-
ments between the vehicles are estimated. Additionally, this work assumes that
the acceleration enters as an input instead of a state. The dynamical model is

x_{t+1} = G x_t + B u_t + \epsilon , \quad \epsilon \sim N(0, Q)    (4.3)

G = \begin{bmatrix} 1 & T_s \\ 0 & 1 \end{bmatrix} \otimes I_{3\times3} , \quad B = \begin{bmatrix} T_s^2/2 \\ T_s \end{bmatrix} \otimes I_{3\times3} , \quad Q = q \cdot \begin{bmatrix} T_s^5/20 & T_s^4/8 \\ T_s^4/8 & T_s^3/3 \end{bmatrix} \otimes I_{3\times3}    (4.4)

where Ts is the period time of the filter, q a tuning constant and ⊗ the Kronecker
product. The model assumes that the derivative of the acceleration is a zero-
mean Gaussian white noise process with the same variance in all directions, i.e.
[x(3) y (3) z (3) ]T ∼ N (0, q · I3x3 ).
From (3.11a) and (3.11b) follows that the prediction step consists of the fol-
lowing equations:

\hat{\mu}_t = G \mu_{t-1} + B u_t    (4.5a)
\hat{\Sigma}_t = G \Sigma_{t-1} G^T + Q + B P_u B^T    (4.5b)

The prediction step is usually executed with a frequency equal to the frequency
of the sensor with the highest sampling rate. Therefore the frequency of the
prediction step is set at 50 Hz, since the update rate of the accelerometer sensor
is 50 Hz, see Section 4.6. The time period is therefore Ts = 1/50 s.
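As an illustration of (4.4)–(4.5), the Python sketch below builds the model matrices and performs one prediction step. It is a sketch under the assumptions above (Ts = 1/50 s, relative acceleration as input with covariance Pu), not the thesis implementation, and the names are illustrative.

import numpy as np

def build_model(Ts, q):
    # Construct G, B and Q from (4.4).
    I3 = np.eye(3)
    G = np.kron(np.array([[1.0, Ts], [0.0, 1.0]]), I3)
    B = np.kron(np.array([[Ts**2 / 2.0], [Ts]]), I3)
    Q = q * np.kron(np.array([[Ts**5 / 20.0, Ts**4 / 8.0],
                              [Ts**4 / 8.0,  Ts**3 / 3.0]]), I3)
    return G, B, Q

def predict(mu, Sigma, u, G, B, Q, Pu):
    # Prediction step (4.5a)-(4.5b) with the relative acceleration u as input.
    mu_pred = G @ mu + B @ u
    Sigma_pred = G @ Sigma @ G.T + Q + B @ Pu @ B.T
    return mu_pred, Sigma_pred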
With the sensors measurement model in (3.10), (3.11c) – (3.11e) gives the
following measurement step

K_t = \hat{\Sigma}_t H_t^T (H_t \hat{\Sigma}_t H_t^T + R)^{-1}    (4.6a)
\tilde{\mu}_t = \hat{\mu}_t + K_t (z_t - h(\hat{\mu}_t))    (4.6b)
\tilde{\Sigma}_t = (I - K_t H_t)\hat{\Sigma}_t (I - K_t H_t)^T + K_t R K_t^T    (4.6c)

where H_t = \left. \frac{dh(x)}{dx} \right|_{x=\hat{\mu}_t} and K_t is the Kalman gain. Equation (4.6c) is written in
Joseph form, see Section 3.4. The measurement step will be different for each
sensor type since they have different models and measure different states. The
sensor models used in the filter are described in the sections below.
As stated in Section 3.4.1, a sensor measurement might be an outlier and
should in that case be removed. In order to detect and remove invalid sensor
data the nis test is used. The nis is calculated as described in (3.13) and is ap-
proximately χ2-distributed. A 95% confidence level is used to classify the mea-
surements as an outlier or a valid measurement.
The initial guess of the z-position is taken as a mean from a series of altitude
estimates provided by the barometers, see Section 4.7. The mean altitude value
is denoted z0 . To calculate the initial guess in the x- and y-position (x0 , y0 ), two
series of uwb distance measurements are taken from each uwb anchor. From the
measurements, the position is trilaterated using the theory in Section 3.5, only
considering the horizontal position. The initial relative velocity is set to zero.
Hence, the initial guess µ0 is

\mu_0 = \begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 0_{3\times1} \end{bmatrix} .    (4.7)

The initial guess of the covariance was determined ad hoc to be

Σ0 = 0.1 I6×6 . (4.8)



4.3 Attitude Measurements


The attitude of the vehicles is estimated with the flight control unit on the uav
and the imu on the ugv. The attitude data is used to create a rotation matrix
{}^{W}R_b , which maps vectors in a body fixed coordinate system to the {W} coordinate
frame as follows

x^W = {}^{W}R_b \, x^b ,    (4.9)
where xb ∈ R3 is a vector expressed in a body fixed coordinate system and xW ∈
R3 is the vector expressed in the {W} frame.

4.4 Ultra-Wide Band Sensor Network


The ultra-wide band (uwb) sensor network consists of one uwb tag mounted on
the uav and four uwb anchors mounted on the ugv, see Section 3.1. For con-
venience a definition is made: An anchor-tag pair designates an anchor and the
tag, which is included in all pairs. With this definition there are four anchor-tag
pairs in the system. The anchor and the tag in an anchor-tag pair communicate
with each other, and the Euclidean distance between the anchor and the tag is
measured. Distance measurements between each anchor-tag pair are provided
at 10 Hz. Hence, the tag receives 40 measurements per second. These measure-
ments are forwarded to the ekf with the same rate. An overview of the uwb
sensor network can be seen in Figure 4.2.


Figure 4.2: Overview of the uwb sensor network. Each anchor-tag pair re-
turns the range between them as well as the position of the anchor in the
{UGV}-frame. The anchor positions are transformed to {W}-frame and sent
with the ranges to the ekf.

4.4.1 Sensor Model


Let the anchor in each anchor-tag pair correspond to a numerical index i ∈ {1, 2, 3, 4}.
The position of anchor i is known in terms of an offset expressed in the {UGV}-frame,
labelled δiUGV . Using the rotation matrix W RUGV , the offset is transformed to the
{W}-frame:

δiW = W RUGV δiUGV . (4.10)

Then, let the position of anchor i in the {W}-frame be piW , which is expressed as

piW = pUGVW + δiW , (4.11)

where pUGVW is the position of the ugv in the {W}-frame. The distance measurement di
between one of the anchors and the tag (mounted on the uav) can then be expressed as follows

di = ||pUAVW − piW ||2 , (4.12)

where pUAVW is the position of the uav in the {W}-frame. Since x1:3 = pUAV − pUGV ,
the distance di is

di = ||x1:3 − δiW ||2 . (4.13)

However, the distance measurement ziUWB from anchor-tag pair i is not without
error. In [32], Gou et al. assumed a linear model between the measurement ziUWB
and the true distance di , i.e.

di = ai ziUWB + bi , (4.14)

where ai is a scaling factor and bi is the bias. Additionally, assuming zero-mean
awgn, the sensor model of anchor-tag pair i is

ziUWB = hiUWB (x) + ε = ( ||x1:3 − δiW ||2 − bi ) / ai + ε , ε ∼ N (0, RiUWB ), (4.15)

where ε is the error and RiUWB is the (1 × 1) error covariance matrix of anchor-tag pair i.
The parameters of each model (ai , bi , RiUWB ) were determined empirically, which
is described in the following section. The Jacobian of hiUWB (x) is

HiUWB (x) = dhiUWB (x)/dx = [ (x1:3 − δiW )T  01×3 ] / ( ai ||x1:3 − δiW ||2 ) . (4.16)
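
As an illustration of how this sensor model enters the ekf, a minimal sketch of hiUWB and its Jacobian is given below. The function names and the state layout [x, y, z, vx, vy, vz] are assumptions made for the example, not the actual implementation.

```python
import numpy as np

def h_uwb(x, delta_w, a, b):
    """Predicted range measurement of one anchor-tag pair, cf. (4.15).

    x       : state vector [x, y, z, vx, vy, vz] (relative position and velocity)
    delta_w : anchor offset expressed in the {W}-frame
    a, b    : scale factor and bias of the linear error model
    """
    return (np.linalg.norm(x[:3] - delta_w) - b) / a

def H_uwb(x, delta_w, a, b):
    """Jacobian of h_uwb with respect to the state, cf. (4.16)."""
    diff = x[:3] - delta_w
    return np.hstack((diff / (a * np.linalg.norm(diff)), np.zeros(3)))
```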

4.4.2 Model Parameter Tests


This section presents how the parameters of the sensor model in (4.15) were
determined for each anchor-tag pair. Tests were conducted to evaluate the
performance and characteristics of the range measurements from each anchor-tag
pair. In these tests, the measured range was compared to a ground truth obtained
with a laser measuring tool. For convenience, the four anchor-tag pairs are
labelled P1, P2, P3 and P4 in this section.

Each test consisted of an anchor-tag pair (one uwb anchor and the uwb tag),
and a laser measuring tool, which can measure distances with sub-centimetre pre-
cision. However, with the test setup used, unevenness of the ground could cause
errors in the centimetre range. The test was conducted in open space outside to
reduce effects from multipath propagation. The tag and anchor were each placed
on top of cardboard boxes to reduce reflections from the ground, see Figure 4.3.

Figure 4.3: Cardboard box with uwb tag to the left. Cardboard box with
uwb anchor to the right.

The test setup can be seen in Figure 4.4. The ground truth d is calculated as

d = dl − doffset . (4.17)

where dl is the distance from the laser measuring tool and doffset is the distance
between the laser and uwb tag. To achieve an adequate sensor model, the setup
was repeated at 20 different distances, 1-30 m. At each distance the true distance
d was measured and 100 samples of the range measurement r were collected from
the uwb tag. The measurements were then repeated for all anchor-tag pairs. P1
was tested an additional time to examine if the model parameters vary between
tests.
As described in (4.15), the relation between the range measurement r and
ground truth d is

r = (d − b)/a + ε , ε ∼ N (0, RUWB ) . (4.18)
a

The parameters (a, b) can be estimated with least squares regression. To estimate

Figure 4.4: Model parameter test setup. The uwb anchor-tag pair measures
the distance r between them. The laser measuring tool measures the refer-
ence distance to the cardboard box dl , which is used as an estimate of the
ground truth.

(a, b) from the measurements from a test, the following matrices are formed

Ar = [ r1^1 1 ; … ; r100^1 1 ; … ; r1^20 1 ; … ; r100^20 1 ] , xpar = [ a ; b ] , yd = [ d^1 ; … ; d^1 ; … ; d^20 ; … ; d^20 ] , (4.19)

where rj^k is the j:th sample of the range measurements at distance d^k . The
parameters are then calculated as

xpar = (ArT Ar )−1 ArT yd . (4.20)

The parameters were estimated for each of the five tests, see Table 4.1. The
bias b can vary between tests, with estimated bias values between 1 and 10 cm. Fur-
thermore, examining the two separate tests with anchor-tag pair P1 shows that the
results are not consistent over time. We conclude that estimating the model of each
individual anchor-tag pair will not be a fruitful endeavour.

Table 4.1: Estimated parameters a and b from the data from each test.

anchor-tag pair a b
P1, test 1 1.0030 0.0820
P1, test 2 1.0043 0.0130
P2 1.0035 0.0348
P3 1.0033 0.0659
P4 1.0015 0.1003

However, the pattern of the estimated parameters is a positive bias b as well
as a scaling parameter a around 1.00. We therefore conclude that a linear model
can still improve the measurements from the uwb sensors. The solution is to
estimate a and b jointly from the data of all tests. The resulting parameters are:

a = 1.0032, b = 0.0580 . (4.21)

To estimate the covariance matrix RUWB , (4.18) is rewritten as

ε = r − (d − b)/a . (4.22)

Since RUWB is a 1 × 1 covariance matrix, it is simply the variance of ε. Therefore,
RUWB is estimated as

RUWB = 1/(N − 1) Σi=1..N ( ri − (di − b)/a )2 = 0.0015 , (4.23)

where N is the total number of measurements.
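
A compact sketch of the parameter estimation in (4.19), (4.20) and (4.23) is given below. The function name and the array layout are our own; the snippet assumes that all range samples and their corresponding ground-truth distances have been stacked into two vectors.

```python
import numpy as np

def fit_uwb_model(r, d):
    """Least-squares fit of the scale a and bias b in d = a*r + b, cf. (4.19)-(4.20),
    followed by the noise variance estimate of (4.23).

    r : stacked raw range measurements from all tests
    d : corresponding ground-truth distances (one entry per measurement)
    """
    A = np.column_stack((r, np.ones_like(r)))
    a, b = np.linalg.lstsq(A, d, rcond=None)[0]

    residuals = r - (d - b) / a                    # epsilon = r - (d - b)/a, cf. (4.22)
    R_uwb = np.sum(residuals**2) / (len(r) - 1)    # unbiased variance estimate
    return a, b, R_uwb
```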


To illustrate the resulting sensor model, the error ε of each sample is scatter-plotted
against the true distance. The line (a − 1)r + b, representing the expected error based
on the model, is plotted as well. The result is presented in
Figure 4.5.

4.4.3 Anchor Configuration


The placement of the uwb anchors on the ugv has an impact on the informa-
tion gain from the sensors. Therefore, the placement of the anchors was carefully
thought out so as to maximise the positional information gained from a measure-
ment. This section describes how the anchor configuration was chosen, yielding
a favourable geometry but still adhering to the practical limitations of the ugv
landing pad.
As described in Section 3.4.2, the crlb can be used to indicate the information
gain of a measurement, with a lower crlb indicating more information. There-
fore, different anchor configurations are compared based on their crlb. The
crlb can be calculated from a linear sensor model, see (3.15).
As described in Appendix A.3, the measurements from all anchor-tag pairs
are sent simultaneously. They are merely handled separately in the ekf. There-
fore, a sensor model where all measurements are concatenated is used. The


Figure 4.5: The sample errors for all model parameter tests plotted against
distance.

model is derived from (4.15),

zUWB = [ z1UWB ; z2UWB ; z3UWB ; z4UWB ] = [ h1UWB (x) ; h2UWB (x) ; h3UWB (x) ; h4UWB (x) ] + e , e ∼ N (0, R4×4 ) , (4.24)

where R4×4 = RUWB I4×4 is the covariance matrix of the measurement error e and
RUWB is the variance of a uwb range measurement determined in the previous
subsection. The model in (4.24) is nonlinear and is therefore linearized using the
first order term from the Taylor series expansion at a state xj . The resulting model
is

y = [ H1UWB (xj ) ; H2UWB (xj ) ; H3UWB (xj ) ; H4UWB (xj ) ] x + e . (4.25)
When evaluating different anchor configurations, it is not sufficient to com-
pare the crlb at a single state xj . However, a number of states can provide a
statistically significant benchmark. Since the uwb measurements only give posi-
tional information, the velocities are irrelevant and set to zero. Also, since the
uwb sensor network will be used to estimate positions above ground (z ≥ 0),
negative z-positions are not studied. In order to achieve a statistically significant
32 4 Estimation System

benchmark, points are randomly placed on a semi-sphere with a fixed radius r. In
this case N = 10000 points are placed. Figure 4.6 illustrates the semi-sphere
where r = 10 m. For each corresponding state xj , CRLB(xj ) is calculated. The
element CRLB(xj )1:1 in the crlb-matrix is the lower bound of the variance of the
estimated x-position. The lower bound CRLB(xj )1:1 is saved for each state and a
mean CRLBx is calculated. The mean is an indication of the positional informa-
tion in the x-direction at the current semi-sphere radius r. The crlb-means for
the estimation of the y- and z-position are calculated as well.
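
The procedure can be summarised in a short sketch. The snippet below computes the crlb from the stacked, linearised uwb model; it is restricted to the three position states so that the Fisher information is invertible (the velocities carry no information from the range measurements). The sampling of the semi-sphere and the function names are our own.

```python
import numpy as np

def crlb_position(p_rel, anchors, a, R_uwb):
    """CRLB of the relative position from one set of four UWB ranges.

    p_rel   : relative position (3,) at which the model is linearised
    anchors : (4, 3) array of anchor offsets in the {W}-frame
    a       : scale factor of the range model
    R_uwb   : variance of a single range measurement
    """
    H = np.array([(p_rel - d) / (a * np.linalg.norm(p_rel - d)) for d in anchors])
    fisher = H.T @ H / R_uwb          # Fisher information of the position states
    return np.linalg.inv(fisher)      # lower bound on the estimation covariance

def crlb_means(anchors, a, R_uwb, radius, n_points=10000, seed=0):
    """Mean CRLB diagonal over points drawn uniformly on a semi-sphere (z >= 0)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_points, 3))
    v[:, 2] = np.abs(v[:, 2])                                   # upper hemisphere only
    pts = radius * v / np.linalg.norm(v, axis=1, keepdims=True)
    diags = np.array([np.diag(crlb_position(p, anchors, a, R_uwb)) for p in pts])
    return diags.mean(axis=0)                                   # [CRLB_x, CRLB_y, CRLB_z]
```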

Figure 4.6: 10000 points randomly placed on a semi-sphere with a radius r = 10 m.

Before deciding on an anchor configuration, the limitations of the landing
platform of the ugv need to be considered. The shape of the landing platform
is described in Section 2.2. Initial tests in MATLAB indicated that a larger convex
hull bounded by the anchor positions would lead to a better estimation. There-
fore, the anchors should be placed at the extremities of the landing pad, i.e. on
the corners of the square. The only variation is in the height of the anchors, when
compared to the base of the landing pad. From these specifications, three differ-
ent potentially interesting configurations were chosen: all anchors on the base of
the pad (Conf 1), three on the pad and one placed at the top of a beam (Conf 2)
and lastly two on the pad and two on top of a beam (Conf 3). The corresponding
anchor positions of Conf 1, 2 and 3 as well as their convex hulls can be seen in
Figures 4.7a, 4.7c and 4.7e.
To compare configurations, the crlb-means CRLBx , CRLBy and CRLBz were
calculated for semi-sphere radii between 0.1 and 50 m. The results from each
configuration are shown in Figures 4.7b, 4.7d and 4.7f.

(a) Conf 1. (b) crlb-means over distance, Conf 1.
(c) Conf 2. (d) crlb-means over distance, Conf 2.
(e) Conf 3. (f) crlb-means over distance, Conf 3.
Figure 4.7: The points and convex hulls corresponding to each anchor con-
figuration. For each configuration, the crlb-means are also shown.

From Figures 4.7b, 4.7d and 4.7f it is observed that CRLBy is basically
identical to CRLBx , i.e. the crlb-mean is identical in each horizontal direc-
tion. Another observation is that the crlb-means seem to grow quadratically
with distance. When studying Figure 4.7b it is clear that Conf 1 leads to essen-
tially no positional information in the z-direction at any radius. When comparing
Figures 4.7b and 4.7d, it can be seen that Conf 2 leads to slightly more positional
information about the horizontal positions. However, the positional information of
Conf 2 in the z-direction is worse when compared to Conf 3. The conclusion is to
choose Conf 3 as the anchor configuration.
As stated previously, the crlb-means seem to grow quadratically with radius
r. To study Conf 3 further, we define the azimuth angle and inclination angle in
the spherical coordinate system as α ∈ [−π, π] and β ∈ [0, π], respectively. The
radius is fixed to r = 10 m, and for each point, α and β are calculated. From the
calculations as well as the crlb of each point, the relation between α and crlb
as well as β and crlb is shown in Figures 4.8 and 4.9.
When analysing Figure 4.8, we first notice the spread of points for each value
of α, caused by the varying β-angle. The opposite behaviour can be seen in
Figure 4.9. However, there is still a clear pattern. When α points in the x-
direction, more information about the x-position is gained. This is the same for
the y-position. A similar behaviour can also be seen in Figure 4.9. When β points
more in the z-direction, the positional information of the z-position is better. Fur-
thermore, we conclude that α has no effect on the positional information of the
z-position.


Figure 4.8: crlb over azimuth angle α. Calculated from 10000 points ran-
domly placed on a semi-sphere with a radius r = 10 m.


Figure 4.9: crlb over inclination angle β. Calculated from 10000 points
randomly placed on a semi-sphere with a radius r = 10 m.

4.5 Camera System


This section describes the camera system, which estimates the position
of the ugv with respect to the uav. The estimated position is expressed in the
{W} coordinate frame and the image data is captured with a calibrated camera.
The camera system can only provide estimates when the uav is above and in
close proximity to the ugv since the camera is facing downwards. The section
contains information about how the position estimation is executed and how the
camera is calibrated. It also presents performance results of the camera system,
using both a simulated and a real camera.
The camera system is divided into four separate parts:
• A calibrated downwards facing camera mounted on the uav.
• A fiducial detection algorithm that detects and estimates the position of a
fiducial marker in image data.
• A fiducial marker with known dimensions mounted on the ugv.
• A coordinate transform that transforms positions from the camera’s refer-
ence frame to the {W}-frame.
The fiducial detection algorithm processes images from the camera. If the marker
mounted on the ugv is detected in an image its position is estimated. The marker’s
estimated position is expressed in the camera’s reference system and needs to be
transformed to global coordinates, which is done with a coordinate transform.

See Figure 4.10 for an overview. These four components are each described in the
following four sections.


Figure 4.10: Camera system overview.

4.5.1 Camera
As described in Section 2.1, a camera is mounted on the uav. The camera is
modelled as a pinhole camera, see Section 3.6. The parameters of the pinhole
model were determined through a calibration procedure.
The calibration procedure is explained in [33]. In essence, the procedure in-
volves holding a checkerboard with known dimensions in front of the camera.
The checkerboard is moved between different poses while the camera is taking
pictures of it. The checkerboard contains 9×7 squares; however, it is the 8×6 in-
terior vertices that are used by the calibration algorithm. When enough
images of the checkerboard have been taken, the calibration algorithm solves an
optimisation problem in order to obtain an estimate of the camera parameters.
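
For reference, a calibration of this kind can be sketched with OpenCV as below. This is not necessarily the tool used in [33]; the folder name, the square size and the variable names are assumptions made for the example.

```python
import glob
import numpy as np
import cv2

# Interior checkerboard corners (the 8x6 vertices of the 9x7-square board).
pattern_size = (8, 6)
square_size = 0.025  # side length of one square in metres (assumed value)

# 3D corner coordinates in the board frame, identical for every image.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob("calibration_images/*.png"):   # assumed image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve the calibration optimisation problem for the intrinsic matrix and
# distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("camera matrix:\n", K)
```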

4.5.2 Fiducial Detection Algorithm


A fiducial detection algorithm is an estimation method that estimates the posi-
tion of a fiducial marker relative to a camera. The research area regarding fidu-
cial algorithms is widespread and there are many open-source algorithms that
are robust and well tested. One such algorithm is called AprilTag, which is used
in this project. AprilTag was created by the APRIL Robotics Laboratory at The
University of Michigan. The system was initially presented in [34] and further
developed in [35] and [36]. AprilTag was chosen because it has a ros implemen-
tation and could therefore be easily integrated with the rest of the system. Initial
tests in simulation also proved that the AprilTag algorithm had adequate perfor-
mance.

4.5.3 Fiducial Marker


A fiducial marker is an artificial object with a unique design, which can be iden-
tified in image data and used as a reference. In the specific case of the camera

(a) Original tag. (b) Tag showing the different regions.

Figure 4.11: Description of the regions of a tag from the TagCustom48h12-family.
The green area corresponds to the part of the tag not used in the detection. The
part of the tag within the red lines is the estimation area.

system the fiducial marker is used to estimate the ugv’s position with respect to
the camera mounted on the uav. This estimation is possible since the size of the
fiducial marker is known. The marker is mounted on top of the ugv, see Section
2.2.
AprilTag can detect a variety of tags, known as tag families8 . Variations be-
tween families can come in size, appearance and other properties. A tag family
contains multiple tags, each identified by their individual ID (positive integer).
The tag family used in this project is called TagCustom48h12, which was designed
in [37]. A tag can usually be divided into two parts: One used for the identifica-
tion and used for the position estimation, formally called the estimation area. The
tags from the TagCustom48h12-family also contain a space in the centre which is
not used in neither the identification nor the estimation process, see Figure 4.11.
This enables for the creation of a recursive tag, which herein is defined as:
A recursive tag is a tag with an inner tag placed in the centre. The inner tag
can also hold another tag placed in its centre, increasing the level of
recursion.
The advantage of recursive tags is that tags with different sizes can be com-
bined into a new tag with an extended range of detection compared to the indi-
vidual tags. By range of detection the authors refer to the interval between the
minimum and maximum distance at which a tag can be detected with a camera.
One of the governing quantities for the range of detection is the size of the tag,
since it determines at which distances it can be seen in the fov of the camera.
When a smaller tag is placed within a larger tag, the combined tag can be detected
from the minimum distance of the smaller tag up to the maximum distance of
the larger tag, given that there is no gap between the tags' ranges of detection.
The recursive tag on the ugv is a combination of two tags in the TagCus-
tom48h12 family. The tags have IDs 0 and 1, see Figure 4.12a and 4.12b. The
8 A description of the different kinds of AprilTag families can be seen at:
https://april.eecs.umich.edu/software/apriltag.html

(a) Outer tag with ID 0. (b) Inner tag with ID 1. (c) Recursive tag.

Figure 4.12: Description of the recursive tag consisting of two tags combined
into one. The green area corresponds to the part of the tag not used in the
detection.

dimensions of the tags as well as their estimation areas are presented in Table 4.2.
The tag with ID 1 is placed inside the tag with ID 0 as explained in Figure 4.12.

Table 4.2: Tag descriptions.

        Dimension           Estimation area
Tag 0   1.4 × 1.4 m²        0.84 × 0.84 m²
Tag 1   0.252 × 0.252 m²    0.151 × 0.151 m²

4.5.4 Coordinate Transform


The coordinate transform layer transforms positions in the camera’s frame of ref-
erence to the {W}-frame. First, the estimated position from the AprilTag algo-
rithm, pcam , is transformed to the {UAV}-frame via the rotation matrix UAV Rcam .
Secondly, a rotation is required from the {UAV}-frame to the {W}-frame, where
the rotation matrix is denoted W RUAV . The coordinate transform layer therefore
performs the following transformation

pW = W RUAV UAV Rcam pcam , (4.26)

where pW is the estimated position from the AprilTag algorithm expressed in the
{W}-frame.
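
A minimal sketch of the transformation in (4.26), assuming that the two rotations are available as quaternions, could look as follows. The function name and the quaternion convention (x, y, z, w) are our own choices for the example.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def tag_position_in_world(p_cam, q_uav_cam, q_w_uav):
    """Transform an AprilTag position estimate to the {W}-frame, cf. (4.26).

    p_cam     : position estimate expressed in the camera frame
    q_uav_cam : quaternion (x, y, z, w) rotating camera coordinates into the {UAV}-frame
    q_w_uav   : quaternion (x, y, z, w) rotating {UAV} coordinates into the {W}-frame
    """
    R_uav_cam = Rotation.from_quat(q_uav_cam).as_matrix()
    R_w_uav = Rotation.from_quat(q_w_uav).as_matrix()
    return R_w_uav @ R_uav_cam @ np.asarray(p_cam)
```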

4.5.5 Error Model


The camera system’s performance was evaluated, both in simulations and exper-
imentally. First, the range of the camera system was evaluated in Gazebo, see
Figure 4.13. The camera was moved in a grid-like pattern in order to examine
where it could detect and estimate the positions of the AprilTags. Figure 4.14
shows that the combination of a small and a large tag allows an extended range
4.5 Camera System 39

of detection. The small tag has a slightly wider area of detection, which is moti-
vated by the fact that parts of the larger tag disappear out of the fov before the
small tag does. The range of detection for the small tag was experimentally confirmed
using the camera mounted on the uav. The tag could be detected at distances up
to 8 m.

Figure 4.13: Illustration of the camera simulation environment. The camera
is mounted underneath the box and the AprilTag can be seen on the ground.
The AprilTag has the same dimensions as the tag on the ugv.


Figure 4.14: Detection range of the small and the large tag. The blue and
red lines indicate the areas where the large and small tags can be detected,
respectively. The right figure is a zoomed-in version of the left figure.

The positioning accuracy was also examined with several simulations:

• For a given position and orientation in the simulation environment, 100
position estimates of the tag were sampled using the camera system. The
bias and the standard deviation are computed using the samples.
• Between each sample the camera is moved randomly in the horizontal plane.
The length of the movement is drawn from a uniform distribution. The lim-
its of the distribution correspond to the tag moving two pixel lengths. This
is done to make sure that the tag is not seen in the exact same pixels for
each sample.

First, the impact of the pitch angle of the camera is investigated. Figure 4.15
displays the bias and standard deviation when the pitch angle is varied for a typ-
ical position. The pitch angle mostly affects the accuracy of the z-estimate, and
the highest error occurs when the camera is parallel with the tag. The maximum
bias in the height estimate is approximately 10 cm for the large tag and 6 cm
when using the small tag. The same applies for the roll angle. In the subsequent
tests the camera is parallel with the tag to obtain a conservative estimate of the
error.

(a) Large tag, camera at position (0.5, 0, 5) (m). (b) Small tag, camera at position (0.2, 0, 2) (m).

Figure 4.15: Bias and standard deviation when varying the pitch angle of the
camera relative to the tag. The angles vary between −17.3◦ and 17.3◦ , which
is the maximum allowed attitude, see Section 5.3. For the largest negative
pitch angles the tags were not entirely inside the fov of the camera.

The next test investigates the estimation quality when the distance along the
z-axis varies, and the results can be seen in Figure 4.16 where it is apparent that
the height estimate is affected the most. Figure 4.17 shows the bias and the stan-
dard deviation for the position estimate when the camera’s x-position varies at

a constant height. The standard deviation and bias of the y-estimate are not
affected when the camera is moving in the x-direction. The bias and the standard
deviation in the x-direction grow as the relative x-distance increases. It has been
confirmed that the opposite applies when the y-position varies.
The highest altitude the uav flies at during the landing manoeuvre is 10 m.
At 10 m the bias in the z-direction for the large tag is around 20 cm and quickly
decreases as the altitude gets smaller. The bias in the horizontal plane is in the
order of a couple of cm throughout all the tests. Comparing these biases to the
precision of the camera sensor placement, which is also in the order of a couple of
cm, it is concluded that they are of similar size. Therefore, the bias is negligible.


(a) Large tag. (b) Small tag.

Figure 4.16: Bias and standard deviation when varying the height of the
camera. The camera is parallel with the tag and the x and y position is 0.

However, a variance model is constructed for the camera system since it is
required by the ekf. New data is generated for the variance model where the
camera is moved to positions in the area spanned by the range of detection. Addi-
tional uncertainty is introduced in the attitude of the camera to emulate a more
realistic system. This uncertainty increases the variance of the position estimate,
but the bias remains the same. Linear regression is used to generate a variance
model for the x-, y- and z-estimates. From the previous results it is concluded
that the variance of the x-estimate is coupled with the altitude and the offset in
the x-direction. The analogous relation applies for the y-estimate. The variance of the
z-estimate is coupled with the horizontal distance and the altitude. The types of
regressors are chosen based on the shape of the variance surface. The regressors
can be seen in Table 4.3. An example of the variance surface and the prediction
of the corresponding model can be seen in Figure 4.18.


(a) Large tag at a height of 10 m. (b) Small tag at a height of 5 m.

Figure 4.17: Bias and standard deviation when varying the x-position of
the camera at a constant height. The camera is parallel with the tag, the y
position is 0.

Table 4.3: Linear regressors for the variance in the x-, y- and z-direction. The
variable r represents the horizontal offset.

Variance Axis   Regressors
x               1, x, z, x·z, x², z²
y               1, y, z, y·z, y², z²
z               1, r², z², z⁴
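
As an illustration, the z-variance model can be fitted with ordinary least squares using the regressors from Table 4.3. The sketch below uses our own function names and assumes that the sample variances have already been computed from the generated data; the x- and y-models are fitted analogously with their respective regressors.

```python
import numpy as np

def fit_z_variance_model(r, z, var_z):
    """Fit the z-variance model using the regressors [1, r^2, z^2, z^4] of Table 4.3.

    r     : horizontal offsets of the training positions
    z     : altitudes of the training positions
    var_z : sample variance of the z-estimate at each position
    """
    Phi = np.column_stack((np.ones_like(r), r**2, z**2, z**4))
    theta, *_ = np.linalg.lstsq(Phi, var_z, rcond=None)
    return theta

def predict_z_variance(theta, r, z):
    """Evaluate the fitted variance model at a new camera position."""
    return theta @ np.array([1.0, r**2, z**2, z**4])
```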

Figure 4.18: Illustration of the variance surface when the x- and z-position
changes. The red stars indicate the prediction from the linear regression
model.

4.6 Accelerometer
The estimation system uses relative acceleration data as input in each time update
step. In order to obtain relative acceleration data, both the uav and the ugv
are equipped with an imu. The imu contains an accelerometer, which measures
acceleration in the vehicle’s local coordinate frame. The acceleration data is low-
pass filtered and down-sampled from 400 Hz to 50 Hz before being passed to the
ekf. An overview of the relative acceleration computation can be seen in Figure
4.19.


Figure 4.19: Overview of the acceleration data processing. The two imus
attached to the vehicles acquire acceleration measurements at 400 Hz. The
prefilter block represents the low-pass filtering and downsampling of the
measurements. After the prefilter block the relative acceleration is com-
puted and passed to the ekf.
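
A minimal sketch of the prefiltering step is given below. The thesis does not specify the exact filter; the snippet uses scipy's decimate, which applies an anti-aliasing low-pass filter before downsampling, as one possible realisation, and the function names are our own.

```python
import numpy as np
from scipy.signal import decimate

def prefilter(acc_400hz, factor=8):
    """Low-pass filter and downsample accelerometer data from 400 Hz to 50 Hz.

    acc_400hz : (N, 3) array of accelerations sampled at 400 Hz
    factor    : downsampling factor (400 / 50 = 8)
    """
    # decimate applies an anti-aliasing low-pass filter before keeping every
    # factor-th sample; an FIR filter keeps the phase distortion small.
    return decimate(acc_400hz, factor, ftype="fir", axis=0)

def relative_acceleration(acc_uav_w, acc_ugv_w):
    """Relative acceleration (UAV minus UGV), both expressed in the {W}-frame."""
    return np.asarray(acc_uav_w) - np.asarray(acc_ugv_w)
```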

4.7 Reference Barometer


A barometer measures the pressure in the atmosphere, which in turn can be
converted to an altitude and vice versa, see Section 3.9. See Figure 4.20 for an
overview of the relative altitude estimate. The relative altitude measurement is
used to aid the position estimate from the uwb sensors.


Figure 4.20: Barometer data from the uav and the ugv is used to compute an
estimate of the relative altitude, which in turn is passed to the ekf.

4.7.1 Sensor Model


The measured pressure data from the barometers are converted to a relative alti-
tude according to (3.26), which is restated here

zrel = za − zb = (T /L) ( (Pa /Pb )−LR/g − 1 ) = h(Pa , Pb , T ) . (4.27)

where Pa is the pressure from the uav’s barometer, Pb is the pressure from the
ugv’s barometer and T is the air temperature. It is assumed that the converted
altitude is under the influence of awgn. The sensor model is

z = zrel = [01×2 1 01×3 ] x + ε , ε ∼ N (0, Rbaro ) , (4.28)

where Rbaro is the covariance matrix.
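
The conversion in (4.27) can be written as a short function. The constant values below are standard-atmosphere values assumed for the example.

```python
# Standard-atmosphere constants (assumed values).
L = 0.0065     # temperature lapse rate [K/m]
R = 287.05     # specific gas constant for dry air [J/(kg K)]
G = 9.80665    # gravitational acceleration [m/s^2]

def relative_altitude(p_uav, p_ugv, temperature):
    """Relative altitude of the UAV above the UGV from two pressure readings, cf. (4.27).

    p_uav, p_ugv : pressures from the UAV and UGV barometers [Pa]
    temperature  : air temperature [K]
    """
    return (temperature / L) * ((p_uav / p_ugv) ** (-L * R / G) - 1.0)
```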

4.7.2 Error Model


In order to estimate the variance of the noise in the sensor model, see (4.28), the
noise of the real barometers is investigated. The sensor test is performed as
follows:

• Both barometers are put on the floor for 100 seconds.


• The first barometer is lifted to a height of 1.94 meters and is left there for
100 seconds.
• The first barometer is then placed on the floor again and is left there for
another 100 seconds.

Figure 4.21 shows the data from the test. The pressure data is used to compute
a height estimate which is presented in Figure 4.22. Table 4.4 presents statistical
data from the relative height estimate. From the data presented in the table, Rbaro
is set to 0.2 m².
For simulation purposes it is also interesting to know the variance of the
barometer measurements. The first 1000 data points of the pressure data give a
variance of 11.56 and 13.89 (in Pa²) for barometer 1 and 2, respectively.

Table 4.4: Statistics of the relative barometer height estimates.

Interval   Mean Altitude (m)   Mean Deviation (m)   Variance (m²)


1 -0.0934 0.0934 0.2105
2 1.6894 0.2506 0.2371
3 -0.0199 0.0199 0.1891


Figure 4.21: Pressure data from the two barometers. Red data correspond to
the barometer on the ground and blue data correspond to the barometer that is
lifted after 100 seconds.


Figure 4.22: Relative height estimates computed from the pressure data in
Figure 4.21. The red lines indicate the mean height for each 100 s time inter-
val.
5 Control System
This chapter describes the subsystem that is responsible for computing appropri-
ate control outputs for the uav to solve the landing task. The subsystem is named
control system.

5.1 Control System Description


The overall control goal of the control system is to land the uav on top of the
ugv. In terms of quantities this is the same as saying that the position of the
uav should be equal to the position of the ugv or equivalently that the relative
distance between the vehicles is zero. Let pUAV ∈ R3 define the position of the
uav in the {W}-frame and pUGV ∈ R3 define the position of the ugv in the {W}-
frame. Then let the relative position vector between the vehicles be defined as
prel = pUAV − pUGV . (5.1)
The 2-norm of the relative position between the vehicles is a natural choice of
error signal for the control system. Let e represent the error signal, i.e.
e = ||prel ||2 . (5.2)
The underlying control goal is then to minimise e.
Since the landing manoeuvre is assumed to take place in an open, relatively
flat environment the control system does not need to account for any obstacles
except the ground and the ugv. This means that the uav can move in any direc-
tion as long as it is above the ground and the ugv. Hence, no obstacle avoidance
algorithm is implemented in this work. However, to ensure that the uav does
not descend too early and collides with the ugv when approaching it, a reference
offset is introduced. The error signal is therefore extended as
e = ||pd − prel ||2 , (5.3)


where pd = [0, 0, hd ]T and hd is an altitude offset which is piecewise constant.


The altitude offset hd is set to 0 when the uav can descend safely onto the ugv.
As mentioned in Section 2.3.2 the uav runs the open source autopilot soft-
ware ArduCopter. The autopilot accepts attitude targets and a climbing rate9
which are translated into low level control actions, i.e. the rotational speed of
each motor. These quantities are controlled via feedback from the sensors at-
tached to the flight control unit.
The attitude target is accepted as a normalised quaternion vector q that de-
scribes the orientation of the {UAV}-frame relative to the {W}-frame. When the
{UAV}-frame is aligned with the {W}-frame the quaternion vector q = [0, 0, 0, 1]T .
The vector q can be transformed into the Euler angles roll (φ), pitch (θ) and yaw
(ψ) which have a more direct interpretation. As described in Section 3.3 a roll
angle and a pitch angle cause movements in the y and x-direction respectively
in the {UAV}-frame. If the target yaw angle is zero, a pitch and a roll angle in
the {UAV}-frame correspond to the same roll and pitch angles in the {W}-frame.
Hence, with a yaw angle of zero the movements in the {UAV}-frame equal the
movements in the {W}-frame. Additionally, the yaw angle does not have an im-
pact on the control or estimation system. Therefore, the yaw angle is set to zero
in the control system.
The climbing rate cr describes the vertical movements of the uav and should
be in the interval [0, 1]. The values of cr have the following interpretation

cr < 0.5 : Descending, maximum descent rate when cr = 0
cr = 0.5 : Hovering
cr > 0.5 : Ascending, maximum ascent rate when cr = 1
The maximum ascent and descent velocity is determined by a selectable parameter
in the autopilot software.
Hence, the task of the control system is to compute the control commands
(φ, θ, cr ) from the current relative position, prel together with the desired height,
hd and then pass them to the ArduCopter autopilot. Since the autopilot requires a
quaternion vector the Euler angles are converted before they are passed onwards,
see (3.5a)-(3.5d). Figure 5.1 shows an overview of the control system.

Figure 5.1: Overview of the control system.

To summarise, the control system should drive the relative position between
9 When used in the GUIDED_NOGPS flight mode, however, ArduCopter supports a variety of flight
modes, see https://ardupilot.org/copter/docs/flight-modes.html.

the vehicles to zero by controlling the attitude and climbing rate of the uav. The
altitude can be directly controlled through changing the climbing rate. The move-
ments in the horizontal plane are indirectly controlled via the attitude targets.
Without the autopilot the uav would lose upwards thrust when tilting in any di-
rection since the total thrust would be split in the horizontal plane as well. How-
ever, the autopilot handles this since it is regulating the vertical velocity. The
entire control system can therefore be divided into a vertical and a horizontal
controller.

5.2 State Machine

The context of the control problem changes with the distance between the vehi-
cles, which motivates the use of different control laws at different distances. A
state machine has therefore been implemented where each state corresponds to
a unique control law. The overall strategy of the state machine is described as
follows:

• The uav is flying at constant altitude and moves towards the ugv at the
maximum allowed velocity. The altitude is high enough to account for in-
accuracies in the height estimation in order to prevent collision with the
ground. When the uav is approaching the ugv it starts to slow down in
order to prevent an overshoot.

• When the uav is in close proximity of the ugv it starts to match the ugv’s
movements and its horizontal position. This requires a faster response time.

• When the uav closely follows the movements of the ugv it starts descend-
ing while keeping the horizontal positions as close as possible.

The above explained actions are each captured in a unique state. The states
are called: APPROACH, FOLLOW and DESCEND. The idea is that the state machine
starts in the APPROACH state and transitions to the other states in the presented
order as the distance between the vehicles gets shorter. A transition occurs if a
condition is met where the governing transition quantity is the relative distance
between the vehicles in the horizontal plane. A state transition can take place
in both directions. To prevent the state machine from switching back and forth
between two states, a deadband is introduced such that a greater distance is re-
quired in order to transition backwards. An overview of the state machine and its
conditions can be seen in Figure 5.2. The following three subsections describe
and motivate each state.

Figure 5.2: The states and state transition conditions of the state machine. The
variable dhor represents the relative distance in the horizontal plane between
the vehicles, kF→A and kD→F represent the additional distance required to
transition backwards. The distances are given in meters and kF→A = 0.2 and
kD→F = 1.0.

5.2.1 Approach
The APPROACH state is the initial state of the state machine. In this state the uav
flies at a constant altitude of approximately 10 meters10 while moving towards
the ugv in the horizontal plane. The main goal is to close the distance between
the vehicles as fast and efficiently as possible. Since the distance to the target is
longest in this state the estimation of the relative position difference, based on the
uwb measurements, exhibits the largest errors. Also, the distance implies that
the movements of the ugv do not affect the direction of the relative position
vector as much compared to the other states. These two facts entail that the
control strategy in this state should be to guide the uav in a coarse direction
towards the ugv, i.e. it should neglect fast movement variations while aiming
for a future point of collision.
As stated in Section 3.8 guidance laws are control laws designed to guide a
pursuer such that it reaches a target object. A common application for guidance
laws is therefore in missile systems. However, Gautam et al. [38] investigates the
possibility to use guidance laws for uav landing. They compare the performance
of three classical guidance laws: pure pursuit, line-of-sight and proportional nav-
igation. Gautam et al. concluded that the proportional navigation (pn) law is the
most efficient in terms of time and required acceleration. Borowczyk et al. [8]
test the pn strategy in a real world scenario with positive results. This motivates
the use of a pn-controller in the APPROACH state.
As described in Section 3.8 the pn-control law provides an acceleration which
is perpendicular to the los-vector between the vehicles. Hence, the control law
only steers the vehicle. The law assumes that the vehicle is propelled forward
from another source such as a constant acceleration or an initial velocity, which
is the case for a missile. Therefore, a complementary controller is required. How-
ever, compared to a missile the approaching uav poses additional constraints on
the relative velocity. The uav should slow down before reaching the target in
order to not overshoot. Hence, the complementary controller should allow the
uav to fly with the maximum allowed velocity, saturated attitude angles, until
it is in the proximity of the ugv. At that point, it should start to slow down to
prevent an overshoot. The implementation of a pn controller that accounts for
10 Chosen as a safety precaution in consultation with the staff at foi.

this is presented in Section 5.3.3.

5.2.2 Follow
The FOLLOW state is used when the uav is closer than 4 meters from the ugv in
the horizontal plane. The goal of the state is to position the uav directly above the
ugv, i.e. a zero relative difference in the horizontal plane. In order to preserve the
matching horizontal position and to track the motions of the ugv it is important
with a fast response time. In this state the desired altitude is lowered to 5 m.
In [39] it is concluded that a pn-controller generates oscillative control actions
at close distances and therefore becomes inefficient, which is solved by switching
to a pd-controller. Borowczyk et al. [8] build on this result in their approach.
They also state that a pd-controller is easier to tune in order to achieve fast re-
sponse time and accuracy.
This motivates the use of a pd-controller in the FOLLOW state. How-
ever, in this work the pd-controller is extended to a pid-controller to mitigate
static errors due to wind or the ugv moving too fast.

5.2.3 Descend
The DESCEND state is the final state in the state-machine. The underlying con-
troller is the same as in the FOLLOW state, i.e. a pid-controller. The difference is
that the desired altitude is set to zero. The uav will therefore start to descend
further towards the ugv.

5.3 Implementation
This section describes how the control laws in the control system are implemented.
The implementation is based on the theory described in sections 3.7–3.8. As men-
tioned in Section 5.1, the control signals allow the control system to be divided
into a vertical and horizontal controller. This is also viable since the error signal
e = ||pd − prel ||2 is minimised when each of its components is minimised, which
can be done independently. The control system is therefore split into a vertical
and horizontal controller. The control algorithms used in the control system re-
quire the relative position and velocity of the uav with respect to the ugv. The
state vector x is therefore

x = [prel ; ṗrel ] ∈ R6 , (5.4)
where ṗrel denotes the time derivative of prel . The overall structure of the control
system is presented in Figure 5.3, which is an extension of the Control System
block in Figure 5.1.
The vertical controller takes the vertical position and velocity (z, ż) as well
as the desired height hd as inputs and outputs a climbing rate cr with the goal of
reaching hd . The implementation of the vertical controller is described in Section
5.3.1.

Figure 5.3: Overall structure of the implementation of the uav control sys-
tem. Inputs are the state vector x and the desired height hd . The control
system outputs control commands in the form of attitude angles (φ, θ, ψ)
and a climbing rate cr .

The horizontal controller block has the goal of steering the uav towards the
ugv to attain a relative horizontal distance of zero. The inputs of the controller
are x, ẋ, y and ẏ. The controller then outputs the attitude target (φ, θ). As stated
earlier ψ is set to 0 throughout the entire landing task.11 The horizontal con-
troller uses two different control structures: a proportional-integral-derivative
(pid) structure (FOLLOW and DESCEND) and a proportional navigation (pn) struc-
ture (APPROACH). It is the task of the state machine to decide which of the two
control structures to use.
The control signals that are passed to the autopilot software must be within
certain boundaries. The attitude angles φ and θ must be within the interval
[−π/2, π/2] and the climbing rate cr ∈ [0, 1]. However, further restrictions of the
steering commands are used as safety precautions to make sure that the uav
cannot accelerate too aggressively during flight. The restrictions are

φ ∈ [−αmax , αmax ], θ ∈ [−αmax , αmax ], cr ∈ [cr,min , 1] , (5.5)

where αmax = 0.3 rad and cr,min = 0.3. For convenience, the saturation function
fsat is defined as

fsat (x, xmin , xmax ) :=  xmax   if x ≥ xmax ,
                           x      if xmin < x < xmax ,      (5.6)
                           xmin   if x ≤ xmin .

11 The choice of ψ = 0 is for simplicity’s sake and is not a restriction of the system. To fly with an
arbitrary yaw angle ψ the attitude vector [φ θ]T is simply rotated ψ radians before being sent to
ArduCopter.

5.3.1 Vertical Controller


This subsection aims to present how the vertical controller is implemented. As
stated earlier, the goal of the vertical controller is to output a climbing rate cr ∈
[cr,min , 1] that minimises the error term (hd − z)2 , i.e. make sure that the uav flies
hd meters above the ugv. The climbing rate cr is controlled via feedback using
a pd-control structure, see Section 3.7. The required inputs are therefore z and
ż. The vertical control problem is the same throughout the entire landing task.
Hence, the same vertical controller can be used in all of the state machine’s states.
The vertical controller is summarised in Figure 5.4.

Figure 5.4: Overall structure of the implementation of the vertical controller.
Inputs are (z, ż) and the desired height hd . The vertical controller outputs a
saturated climbing rate cr .

The error signal ez used by the pd-controller is chosen as ez = hd − z. With


only a proportional part in the controller this choice of error signal would output
a positive cr if the uav is below the desired altitude, i.e. ascend. And vice versa if
the uav is above the desired altitude. Thus, the choice of error signal as ez = hd − z
minimises (hd − z)2 . The output from the pd controller, uz , is saturated, returning
the outgoing climbing rate:

cr = fsat (uz , cr,min , 1) . (5.7)
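
A sketch of the vertical controller is given below. The gains are illustrative placeholders rather than the tuned values, and the hovering offset of 0.5 is our own addition to make the example consistent with the climbing-rate convention described in Section 5.1.

```python
def saturate(x, x_min, x_max):
    """Saturation function f_sat from (5.6)."""
    return max(x_min, min(x, x_max))

def vertical_controller(z, z_dot, h_d, kp=0.5, kd=0.3, cr_min=0.3):
    """PD vertical controller returning a saturated climbing rate, cf. (5.7).

    z, z_dot : relative vertical position and velocity of the UAV w.r.t. the UGV
    h_d      : desired altitude offset above the UGV
    kp, kd   : illustrative gains, not the tuned thesis values
    """
    e_z = h_d - z
    # 0.5 corresponds to hovering; the PD term shifts the climbing rate around it
    # (this offset is an assumption made for the sketch).
    u_z = 0.5 + kp * e_z - kd * z_dot
    return saturate(u_z, cr_min, 1.0)
```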

5.3.2 Horizontal PID Controller


This subsection aims to describe how the horizontal pid controller is implemented.
The control goal of driving the relative horizontal distance to zero is equivalent
to minimising the relative distance in the x- and y-axis independently. Hence,
two decoupled pid-controllers can be used, each steering the uav in the x- and y-
direction respectively. An overview of the control structure is presented in Figure
5.5.
The controller has the goal of minimising x2 and y 2 through changes in the
attitude angles (φ, θ). To avoid repetition when explaining the pid-controllers,
consider the case of the controller in the x-direction. According to Section 3.3,
there is a relation between attitude angles and acceleration. Therefore, if the pid-
controller generates an acceleration, it can subsequently be converted to an atti-
tude angle. Suppose the error signal ex for the pid-controller in the x-direction is

Figure 5.5: Overall structure of the implementation of the horizontal pid
controller. Inputs are the horizontal position and velocity and the outputs are
the attitude angles (φ, θ, ψ).

chosen as ex = −x. With only a proportional term in the controller this error sig-
nal would result in a desired negative acceleration when the relative x-distance
is positive, i.e. the uav accelerates towards the ugv. Vice versa when the relative
x-distance is negative. Thus, the choice of error signal as ex = −x minimises x2 .
The pid controllers return the horizontal accelerations (ax , ay ).
Since the pid controller is implemented in discrete time, the integral in (3.24)
must be approximated. A common approximation is derived using the Euler
method, and gives
∫0t ex (τ) dτ ≈ Ts Σj=0..k ex,j , (5.8)

where k is the sample at time t and Ts is the period time of the controller. How-
ever, when this approximation is coupled with control signal saturation it can
lead to an integrator windup [28]. To avoid this phenomenon, the error sum is
saturated and multiplied with a decay factor df < 1. The saturation prevents
the I-part from being too large and the decay factor makes sure that older errors
do not have as much impact as newer ones. The error sum at sample k, Sx,k , is
calculated as
Sx,k = df · fsat (Sx,k−1 + ex,k , −Smax , Smax ) , (5.9)

where Smax is the saturation limit and Sx,0 = 0. The I-part at sample k, Ix,k , is
calculated as
Ix,k = Ki · Ts · Sx,k , (5.10)

where Ki is the integral gain.


When converting the horizontal acceleration signals (ax , ay ) to the attitude an-
gles (φ, θ), the model of uav flight described in Section 3.3 is simplified. Firstly,

the yaw angle ψ is set to zero. Inserting ψ = 0 in (3.8) gives

[ẍ ; ÿ ; z̈] = [0 ; 0 ; −g] + (T /m) [cos φ sin θ ; − sin φ ; cos φ cos θ] . (5.11)
The model in (5.11) can be simplified further.


As described in Section 5.3, the maximum allowed attitude angle in the con-
trol system is ± 0.3 rad (≈ ±17◦ ). The attitude angles of the uav will therefore
be relatively small, and the small-angle approximation (sin v ≈ v, cos v ≈ 1,
tan v ≈ v) is applicable. Applying the small-angle approximation to (5.11) re-
sults in

[ẍ ; ÿ ; z̈] = [θ · T ′ ; −φ · T ′ ; −g + T ′ ] , (5.12)

where T ′ = T /m. Equation (5.12) describes that a change in the pitch angle θ,
leads to a proportional change of acceleration in the x-direction and vice versa
for −φ and y. Therefore, the control signals are mapped to attitude angles as
follows:
φ = −ay , θ = ax . (5.13)

However, the outputted attitude angles must be saturated to [−αmax , αmax ], resulting in

φ = fsat (−ay , −αmax , αmax ) , θ = fsat (ax , −αmax , αmax ) . (5.14)
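
The horizontal pid channel, including the decaying and saturated error sum of (5.9)–(5.10) and the mapping to attitude angles in (5.13)–(5.14), can be sketched as follows. The class structure and the gains are our own choices for the example.

```python
def saturate(x, x_min, x_max):
    """Saturation function f_sat from (5.6)."""
    return max(x_min, min(x, x_max))

class HorizontalPid:
    """One horizontal PID channel with the decaying, saturated error sum of
    (5.9)-(5.10). The gains are illustrative placeholders."""

    def __init__(self, kp, ki, kd, Ts, s_max=5.0, decay=0.95):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.Ts, self.s_max, self.decay = Ts, s_max, decay
        self.error_sum = 0.0

    def step(self, pos, vel):
        """Return the requested acceleration for one axis (pos = x or y)."""
        e = -pos                                 # e_x = -x drives the distance to zero
        e_dot = -vel
        # Anti-windup: saturate the error sum and let old errors decay, cf. (5.9).
        self.error_sum = self.decay * saturate(self.error_sum + e,
                                               -self.s_max, self.s_max)
        i_part = self.ki * self.Ts * self.error_sum     # cf. (5.10)
        return self.kp * e + self.kd * e_dot + i_part

def accelerations_to_attitude(a_x, a_y, alpha_max=0.3):
    """Map horizontal accelerations to saturated roll and pitch targets, cf. (5.13)-(5.14)."""
    phi = saturate(-a_y, -alpha_max, alpha_max)
    theta = saturate(a_x, -alpha_max, alpha_max)
    return phi, theta
```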

5.3.3 Proportional navigation Control


As stated in section 5.2.1 the pn control law only provides an acceleration vector
perpendicular to the los-vector between the vehicles. To address this, Borowczyk
et al. complement their pn controller with a pd controller that outputs an acceler-
ation parallel with the los-vector [8]. The same control structure is used in this
work but with a modification in the pd controller. The perpendicular accelera-
tion a⊥ ∈ R2 and a parallel acceleration a∥ ∈ R2 are combined to an acceleration
axy before being converted to the attitude angles (φ, θ), which are the final outputs of
the horizontal pn controller. The overall structure of the horizontal pn controller
is presented in Figure 5.6.
The pd-structure that computes a∥ is based on the theory described in Section
3.7. Let pxy = x1:2 and vxy = x4:5 , i.e. the relative position and velocity between
the uav and the ugv in the horizontal plane. Then let ep = −pxy be the posi-
tion error. Since vxy is not always in the same direction as the los vector, vxy is
projected onto the los vector as follows

v∥los = ( (pxy · vxy ) / (pxy · pxy ) ) pxy . (5.15)

Figure 5.6: Overall structure of the implementation of the horizontal pn-
controller. Inputs are the horizontal position and velocity and the outputs are
the attitude angles (φ, θ, ψ).

The velocity error ev is then defined as

ev = kapproach · v∥los / ||v∥los ||2 − v∥los , (5.16)

where kapproach is a positive parameter that describes a desired approaching veloc-
ity. The parallel acceleration a∥ is finally computed as

a∥ = Kp ep + Kd ev , (5.17)

where Kp and Kd are positive tuneable parameters. The parameters are chosen
such that the proportional term dominates when the uav is far away from the
ugv, to make sure that the uav approaches the ugv with the maximum allowed
velocity. When the uav approaches the ugv the proportional error will shrink,
and the relative velocity will start to dominate and make sure that the uav slows
down to prevent an overshoot.
In order to compute a⊥ , (3.25) is used as follows

a′⊥ = λ|v| (p/|p|) × Ω , Ω = (p × v)/(p · p) , (5.18)

where p = [x1:2T 0]T and v = [x4:5T 0]T . The third component is set to zero
since an acceleration in the horizontal plane is desired. The sign of the equa-
tion has changed since x1:2 = {Pursuer position} − {Target position} and x4:5 =
{Pursuer velocity} − {Target velocity}, which is opposite from the theory. The re-
quested perpendicular acceleration a⊥ is the first two components of a′⊥ .
From experiments it has been noted that the length of a∥ is much larger than
a⊥ . Changing the value of the Kp -parameter does not solve this since a∥ is pro-
portional to the relative distance, which varies a lot during the APPROACH phase.
This becomes an issue when the accelerations are converted to attitude angles

according to (5.14). The saturation performed in (5.14) takes place in the x- and
y-direction. If a∥ and a⊥ are not aligned with those axes, the much larger a∥ -
vector would dominate and surpass the saturation limit, such that a⊥ has no
effect. To solve this, the saturation is performed on the individual accelerations
before they are combined. The saturation is implemented as follows
a∥,sat = fsat (|a∥ |, −αmax , αmax ) · a∥ /|a∥ | , (5.19a)
a⊥,sat = fsat (|a⊥ |, −αmax , αmax ) · a⊥ /|a⊥ | . (5.19b)

Thereafter, the accelerations are combined in the following manner

axy = αmax · (a⊥,sat + a∥,sat ) / max(||a⊥,sat + a∥,sat ||∞ , αmax ) , (5.20)

which guarantees that each element of axy is within the interval [−αmax , αmax ].
Finally, the attitude angles can be computed using (5.13) as follows

φ = −axy,2 , θ = axy,1 . (5.21)
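
A condensed sketch of the complete horizontal pn controller is given below. The gains λ, Kp , Kd and kapproach are illustrative placeholders, and the small guards against division by zero are our own additions.

```python
import numpy as np

def pn_controller(p_xy, v_xy, lam=3.0, kp=0.1, kd=0.5, k_approach=4.0,
                  alpha_max=0.3, eps=1e-9):
    """Horizontal PN controller sketch, cf. (5.15)-(5.21).

    p_xy, v_xy : relative horizontal position and velocity (UAV minus UGV)
    """
    p_xy = np.asarray(p_xy, dtype=float)
    v_xy = np.asarray(v_xy, dtype=float)
    p = np.append(p_xy, 0.0)
    v = np.append(v_xy, 0.0)

    # Acceleration parallel to the LOS vector, cf. (5.15)-(5.17).
    e_p = -p_xy
    v_los = (p_xy @ v_xy) / (p_xy @ p_xy + eps) * p_xy
    e_v = k_approach * v_los / (np.linalg.norm(v_los) + eps) - v_los
    a_par = kp * e_p + kd * e_v

    # Acceleration perpendicular to the LOS vector, cf. (5.18).
    omega = np.cross(p, v) / (p @ p + eps)
    a_perp = (lam * np.linalg.norm(v) *
              np.cross(p / (np.linalg.norm(p) + eps), omega))[:2]

    # Saturate each contribution separately before combining, cf. (5.19)-(5.20).
    a_par = min(np.linalg.norm(a_par), alpha_max) * a_par / (np.linalg.norm(a_par) + eps)
    a_perp = min(np.linalg.norm(a_perp), alpha_max) * a_perp / (np.linalg.norm(a_perp) + eps)
    a_sum = a_par + a_perp
    a_xy = alpha_max * a_sum / max(np.linalg.norm(a_sum, ord=np.inf), alpha_max)

    # Map the accelerations to attitude targets, cf. (5.21).
    return -a_xy[1], a_xy[0]          # (phi, theta)
```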

5.4 Control System Evaluations


The control system’s performance is evaluated in the simulation environment de-
scribed in Section 2.4. During this test the control system uses true position and
velocity data when computing the control commands. A simulated test is per-
formed as follows:
• The uav is spawned at the position [0 0 0]T . The ugv is randomly placed
on a circle with radius 50 m and the ugv’s heading is randomly selected.

• When the flight controller is initialised, which takes about 10 s, the uav
starts to ascend and the ekf is initialised. Simultaneously, the ugv starts
moving in the horizontal plane with a velocity of 4 m/s according to the
pattern described in Appendix C.3.
• When the uav reaches an altitude of 9 m it starts using the outputs of the
control system in order to approach and land on top of the ugv.
During the entire simulation the uav is under the influence of an external wind
force, see Appendix C.1.
The simulation is repeated 100 times. Figure 5.7 shows the ugv’s movements
during the 100 runs. The shape of the trajectories is determined by the ugv’s
starting position, its heading at the start and how the ugv is manoeuvring during
the simulation. Additionally, the length of the trajectory is determined by the
time it takes for the uav to land, which is influenced by the ugv’s movements
and the direction and amplitude of the wind. Figure 5.8 illustrates the uav’s
flight path together with the movements of the ugv from the previous figure.


Figure 5.7: Movement trajectories for the ugv during 100 simulation runs. The
red crosses indicate the starting positions of the ugv, the black lines show
the trajectories and the blue stars represent the position of the ugv when the
uav has landed on top of it.

Figure 5.8: The ugv’s and the uav’s movement paths during the 100 runs.
The uav’s flight paths are represented with the purple lines.

The uav lands on top of the ugv in each simulation. The landing positions
of the uav on the ugv platform can be seen in Figure 5.9 together with a 95
% confidence interval. Additionally, Figure 5.9 presents 100 landing positions
when the Ki constant is set to zero for both horizontal pid-controllers, i.e. no
I-part during the landing phase. Without an I-part the landing positions are shifted
backwards, which makes sense since the ugv is moving in the opposite direction.
Additionally, the landing positions are more spread out when no I-part is being
used. In this idealised case, the landing accuracy is high and 95 % of the landings
occur less than 20 cm from the centre of the landing pad.

Figure 5.9: uav landing positions on the ugv, with and without an I-part
in the horizontal pid-controller. The black line indicates the border of the
landing platform on the ugv, the dashed black line illustrates the region
where the uav can land safely in order to have all four legs on the platform
and the black arrow shows the direction the ugv is moving in. The blue
circles show the landing positions when an I-part is used, and the red crosses
show the landing position when no I-part is used. Two 95 % confidence
ellipses are presented alongside the landing positions.

The time distribution for the simulations is studied, in particular the elapsed
time for all of the states, the APPROACH state as well as the FOLLOW and DESCEND
state. The median, min and max time of each scenario is presented in Table
5.1, which is based on the distributions presented in Appendix D.1. The varia-
tions of elapsed time during an entire mission seem to be caused by variations in
the APPROACH phase. Further investigation of these variations conclude that the
main cause is the environmental circumstances of a simulation, i.e. the path the
ugv traverses as well as the direction of the wind.
The fluctuations in elapsed time in the FOLLOW and DESCEND states are signif-
icantly smaller when compared to the APPROACH state. The extremes of these
states were studied further, and it was confirmed that the additional time was
caused by so-called retakes. If the uav is moving too far away from the center of
the ugv during the DESCEND state, it results in a transition back to the FOLLOW
state, or a retake. Retakes were observed in 12 of the 100 simulations. The re-
takes are not caused by estimation errors since the control system uses true posi-
tion and velocity data. Factors causing a retake is instead sudden change of the
ugv’s movements or shift in amplitude of the wind.

Table 5.1: Time distribution from 100 simulations. Median, min and max
of the elapsed time in all of the states, the APPROACH state as well as the
FOLLOW and DESCEND state.

States               Median (s)   Min (s)   Max (s)
All                  35.0         22.2      87.9
APPROACH             15.9         2.7       68.8
FOLLOW and DESCEND   19.0         17.4      31.0

In order to further illustrate how the control system operates, data from one
of the 100 simulations is presented. Figure 5.10 shows the uav’s and the ugv’s
horizontal movements and Figure 5.11 shows the vehicles’ respective vertical po-
sitions during one simulation. In Figure 5.10 it can be seen that the uav is drift-
ing in the wind direction while it is ascending. When the uav reaches an altitude
of 9 m it starts using the outputs from the control system and therefore starts
moving towards the ugv. When the uav transitions from the APPROACH state
to the FOLLOW state the desired altitude changes from 10 m to 5 m, which Fig-
ure 5.11 shows. The uav transitions to the DESCEND state before it reaches an
altitude of 5 m and will therefore continue to descend until it has landed on the
ugv.
To get an understanding of how well the uav is able to track the ugv's velocity in the horizontal plane, the velocities in the x-direction are presented in Figure 5.12. It can be seen that the vehicles' velocities differ during the APPROACH state, but as the uav transitions to the FOLLOW and DESCEND states it tries to match the ugv's velocity. The uav's velocity exhibits a slightly oscillatory behaviour during the FOLLOW and DESCEND states. An explanation for this is that the current implementation of the pid-controller only reacts to the current states and does not predict future state values.


Figure 5.10: The uav’s and the ugv’s horizontal movements during one sim-
ulation. The blue arrow shows the direction of the wind and the black cross
the position where the uav reaches an altitude of 9 m.

Figure 5.13 shows the attitude angles, roll and pitch, from the control system
during the simulation. When the control system transitions from the APPROACH
to the FOLLOW state, a steep change in control signals is observed. These changes
are caused by the switch between controllers in the state transition. This be-
haviour has been reduced in the control system tuning but is evidently still present.
Furthermore, the control signals show oscillatory behaviour in the DESCEND state, see Figure 5.13. The cause of the oscillations is that the uav has the goal of matching its position and velocity with the ugv's. With the ugv changing direction regularly, as well as changes in the wind amplitude, the control signal has difficulty reaching a steady state.

Figure 5.11: The uav’s and the ugv’s vertical position during one simula-
tion. The leftmost dotted line indicates when the uav reaches an altitude of
9 m. The other two dotted lines indicate transition in the state machine.


Figure 5.12: The uav and ugv velocity in the x-direction during one simu-
lation.


Figure 5.13: Attitude output (roll and pitch angles) from the control system
during one simulation.
6 Landing System Evaluation
This chapter presents the results from the evaluation of the landing system. In
Section 6.1 the results from simulated landings using both the estimation and
the control system are presented. An experimental validation of the estimation
system is presented in Section 6.2.

6.1 Landing System Simulation


As presented in Section 5.4, the control system was capable of executing 100 con-
secutive landings when using true state data. The next step is to combine the
control system and the estimation system and evaluate their combined perfor-
mances, i.e. let the control system compute control commands using estimated
position and velocity data. The analysis is carried out using the same simulation
environment with identical preliminaries as in Section 5.4, with the exception
that the control system uses estimated states.
Figure 6.1 illustrates the trajectories of both the uav and the ugv. The flight paths in Figure 6.1 do not show any new behaviour compared to the paths in Figure 5.8, and the uav is able to land on the ugv during every simulation in this case as well. The landing positions of the uav can be seen together with a 95 % confidence interval in Figure 6.2. Compared to the landing positions in Figure 5.9, the landing positions are somewhat more spread out, but still within the uav footprint margin. The uav lands within approximately 30 cm from the centre of the landing pad in all simulations.


Figure 6.1: uav and ugv trajectories over 100 simulations.



Figure 6.2: Landing positions of the uav with respect to the {UGV}-frame.
The positions of 100 runs as well as 95 % confidence interval of the landing
positions are shown. The black dashed line represents the footprint margin
of the uav, i.e. the area in which the uav can land safely.

The time distribution of the 100 runs can be seen in Table 6.1. The table is
generated from the data presented in Appendix D.1. Compared to Table 5.1 it
can be seen that the median time is slightly larger when using estimated data,
which is expected. The median time in the APPROACH state is similar between
the runs. However, the max time when using true data is about 10 s larger com-
pared to when using estimated data. This indicates that the extreme cases in
the APPROACH state are mostly determined by the environmental circumstances rather than the estimation quality. On the contrary, the extreme cases in the FOLLOW and DESCEND states suggest that the governing factor is the estimation quality. The median time is not substantially increased; however, the max time is approximately 50 % larger using estimated data. The uav exhibits 21 retakes when using estimated data compared to the 12 retakes with true data. The additional retakes together with the larger max time indicate a more uncertain behaviour during the FOLLOW and DESCEND states when estimated data is used.

Table 6.1: Time distribution from 100 simulations. Median, min and max
of the elapsed time in all of the states, the APPROACH state as well as the
FOLLOW and DESCEND state.

States               Median (s)   Min (s)   Max (s)
All                  36.8         22.5      95.1
APPROACH             16.4         0.7       54.6
FOLLOW and DESCEND   19.9         17.9      46.4

The performance of the estimation system is measured using the root mean square error (rmse) of the estimated states. The estimated states are {x, y, z, vx, vy, vz}. Since the directions of the x- and y-axes are determined by the underlying reference system, which is decoupled from the vehicles' orientation, a horizontal distance and velocity are used instead in the evaluations. The horizontal distance r is defined as r = √(x² + y²) and the horizontal velocity vr is defined as vr = √(vx² + vy²).
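As an illustration, the horizontal rmse can be computed from the estimation errors along x and y. The sketch below assumes that the per-sample error is the Euclidean error in the horizontal plane; the exact way the two error components are combined in the evaluation is not spelled out here, so treat this as an assumption.

```python
import numpy as np

def horizontal_rmse(x_err, y_err):
    """RMSE of the horizontal (Euclidean) estimation error."""
    x_err = np.asarray(x_err, dtype=float)
    y_err = np.asarray(y_err, dtype=float)
    return np.sqrt(np.mean(x_err**2 + y_err**2))
```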
During a simulation, the relative distance between the vehicles varies. In the
APPROACH state the relative horizontal distance ranges from 4 to 100 m while
in the FOLLOW state and the DESCEND state the horizontal distance is below 4
m. The relative distance affects the estimation quality and the rmse is therefore
calculated separately for the APPROACH state and landing phase.
Table 6.2 and Table 6.3 present the mean, min and max rmse over the 100 runs for the position and velocity estimates, respectively. Additionally, the tables present data from simulations where the camera is excluded. The distributions of the rmse in simulation are presented further in Appendix D.2.
First, it can be seen that without the camera, the rmse for the horizontal and vertical position estimates is slightly lower in the APPROACH state but higher in the FOLLOW and DESCEND states compared to when the camera is used. In the APPROACH state the uav flies at the maximum allowed altitude, 10 m, and in the FOLLOW and DESCEND states the uav immediately starts to descend to an altitude of 5 m and 0 m, respectively. In Section 4.5.5 it is stated that the bias in the camera system increases with the altitude and that the bias is not accounted for in the estimate. This suggests that the bias in the camera system's estimate has most impact in the APPROACH state and decreases in the FOLLOW and DESCEND states, which the results reflect. Furthermore, the camera system only provides position estimates at the very end of the APPROACH phase, when the vehicles are close to each other. The velocity estimate is not affected by bias in the position estimate, which can also be seen in Table 6.3.
Secondly, the horizontal position and velocity have a much larger mean rmse
compared to the vertical quantities in the APPROACH state. The explanation is
that the system mainly uses measurements from the uwb sensor network when
estimating the horizontal quantities, while the reference barometer is used for the
vertical estimates. From Section 4.4.3 it is concluded that the uwb sensor network
exhibits a larger position error at longer distances. Comparing the results from
the APPROACH state with the results from the FOLLOW and DESCEND states, it can be seen that the estimation errors are significantly lower for the latter. The accuracy of the uwb system improves during the FOLLOW and DESCEND phases
due to a more favourable geometry. Furthermore, the camera-based positioning
system can also provide measurements in these phases.

Table 6.2: Mean, min and max rmse for horizontal and vertical position
during 100 simulations. The rmse is divided into the APPROACH state and
the FOLLOW and DESCEND states. Additionally, rmse without the camera
system is presented.

State                Camera   Horizontal Position (m)      Vertical Position (m)
                              Mean    Min     Max          Mean    Min     Max
APPROACH             Yes      4.54    2.8     7.29         0.11    0.06    0.24
APPROACH             No       4.46    2.56    6.69         0.12    0.07    0.27
FOLLOW and DESCEND   Yes      0.11    0.06    0.19         0.05    0.03    0.1
FOLLOW and DESCEND   No       0.18    0.08    0.4          0.06    0.03    0.19

Table 6.3: Mean, min and max rmse for horizontal and vertical velocity
during 100 simulations. The rmse is divided into the APPROACH state and
the FOLLOW and DESCEND states. Additionally, rmse without the camera
system is presented.

State                Camera   Horizontal Velocity (m/s)    Vertical Velocity (m/s)
                              Mean    Min     Max          Mean    Min     Max
APPROACH             Yes      0.88    0.38    1.56         0.12    0.06    0.25
APPROACH             No       0.92    0.44    2.66         0.13    0.06    0.25
FOLLOW and DESCEND   Yes      0.17    0.08    0.54         0.08    0.05    0.23
FOLLOW and DESCEND   No       0.20    0.1     0.86         0.1     0.05    0.46

To illustrate the impact of the camera system during the landing phase, the
landing positions of the uav without a camera are presented in Figure 6.3. The
landing positions are more spread out compared to Figure 6.2. Also, a few of the
landing positions are outside the safety margins. This does not necessarily mean
that one of the uav’s legs is outside the platform, but there is a risk. Additionally,
the uav performs retakes in 65 of the 100 simulations. This indicates that the
landing system is less robust during landing when omitting measurements from
the camera system.

Figure 6.3: Landing positions of the uav with respect to the {UGV}-frame
without the camera system. The positions of 100 runs as well as 95 % con-
fidence interval of the landing position are shown. The black dashed line
represents the footprint margin of the uav, i.e. the area in which the uav
can land safely.

6.2 Estimation System Validation


Since the landing system has proven to work in a realistic simulation environ-
ment, the next step is to validate it experimentally. To make sure that every
subsystem is working as intended, the validation is performed incrementally. A
part of the validation process, which was executed in this thesis, is to validate the
estimation system with physical sensors. The results from the test are presented
in this section.
The validation test was performed in the Visionen laboratory environment at
Linköping University. Visionen is equipped with a camera positioning system
that can provide a position and velocity estimate with millimetre precision using
Qualisys camera tracking system12 . The positioning environment is in an arena
with physical dimensions similar to the volume in which the uav executes its
landing phase. These experiments are therefore used to validate the accuracy
of the estimation system during the landing phase, with the estimates from the
Qualisys camera system used as ground truth.
The ugv was stationary during the test and lacked a communication link with the uav. Hence, the attitude, accelerometer and barometer data from the imu on the ugv could not be utilised. The attitude of the ugv was instead taken from the Qualisys camera system at the start of the simulation, and the ugv's accelerations were set to zero. Due to the time variance of the air pressure readings from the barometer, caused by naturally varying air pressure and sensor bias drift, the height estimate from the single barometer deviated more than expected. Therefore, the height measurement was instead simulated using height data from Qualisys. In accordance with the reference barometer error model described in Section 4.7.2, awgn with variance 0.2 m² was added to the height measurements. Additionally, a bias of 0.2 m was added, which is a conservative mean of the deviations in Table 4.4.
The test setup was as follows:
• The ugv was placed in the arena. The positions of the ugv-mounted uwb anchors were noted.
• The Jetson on the uav was powered on and started recording sensor data from the camera, imu, uwb tag and Pixhawk. The ground truth pose and velocity for each vehicle were recorded.
• The uav’s engines were powered on.
After the setup, the uav took off and the pilot started flying the uav around in
the room, while the Jetson continued to record data. Figure 6.4 presents the uav
and ugv during the test.
The flight test lasted for 300 s in total and during that time, a number of man-
ual landing manoeuvres were performed to provide data similar to the FOLLOW
and DESCEND states in simulation but without any actual uav touch-downs. Three
of these landing manoeuvres were extracted and further analysed.
12 For more information on Qualisys, see: https://www.qualisys.com/software/qualisys-track-
manager/

Figure 6.4: The uav and ugv during the Visionen test.

The recorded data from the manoeuvres, as well as the artificial sensor mea-
surements from the ugv, are post-processed in the estimation system. The post-
processing is performed using two sensor setups: one with all sensors used in the estimation system and one with all sensors except the camera system. The second sensor setup is motivated by the goal of finding out if a landing can be achieved covertly and independently of the time of day. Such a landing could not rely on the camera system in its current state, since it requires visibility of the tag, which is not possible at night. Solutions such as lighting up the landing pad would not meet the covert landing criterion. From both setups, the positions
and velocities in each manoeuvre are estimated. The uav trajectory (estimated
as well as ground truth) from the second flight manoeuvre is presented in Figure
6.5 to illustrate the performance of the estimation.
Figure 6.5: uav trajectory (ground truth and estimation) from the second flight manoeuvre in the Visionen test. The landing pad of the ugv with four beams pointing upwards is shown as well.

To further investigate the estimation accuracy for both sensor setups, the rmse, the maximum absolute error and the 95th percentile of the absolute error for the position and velocity estimates are determined, see Table 6.4 and Table 6.5. The results show that the rmse for the horizontal position estimate is almost doubled when the camera system is excluded. The results are also compared with the results in Table 6.2 and Table 6.3 for the FOLLOW and DESCEND states. The individual rmse from the experimental tests are less than or equal to the corresponding maximum rmse in simulation. This suggests that there exist simulations where the estimation quality is similar to the experimental tests.

The maximum absolute error in Table 6.4 and 6.5 is around 2 to 3 times larger than the rmse. The greatest discrepancy between the rmse and the maximum is seen in landing manoeuvres 1 and 3. In these manoeuvres, the large error was caused by the camera losing sight of the tag. In the case of flight manoeuvre 2, the camera has sight of the tag during the entire manoeuvre, see Figure 6.6. This explains the significantly lower maximum error when the camera is used. Comparing the estimation errors without the camera system, the maximum absolute error is similar during all three manoeuvres. The maximum error without the camera system is also similar to the maximum error during flights 1 and 3, which confirms that the large error is caused by the camera system losing track of the tag.
The estimation without the camera system was investigated further. It was
concluded that the large errors can occur during the entire manoeuvre. Hence,
with only a uwb sensor network and an imu for estimation, a relatively high
horizontal estimation error can appear at close distances to the ugv (∼1-2 m).

Table 6.4: rmse, maximum absolute error as well as the 95th percentile of
the absolute error (in m) for horizontal and vertical position for three differ-
ent landing manoeuvres during the Visionen test. Results are shown with
and without the camera system.

Landing Manoeuvre   Camera   Horizontal Position          Vertical Position
                             RMSE    Max     95th         RMSE    Max     95th
1                   Yes      0.19    0.54    0.48         0.10    0.19    0.15
2                   Yes      0.09    0.19    0.16         0.08    0.18    0.14
3                   Yes      0.18    0.65    0.41         0.07    0.29    0.13
1                   No       0.21    0.54    0.45         0.10    0.23    0.18
2                   No       0.39    0.65    0.61         0.12    0.21    0.18
3                   No       0.29    0.56    0.47         0.07    0.17    0.14

Table 6.5: rmse, maximum absolute error as well as the 95th percentile of
the absolute error (in m/s) for horizontal and vertical velocity for three dif-
ferent landing manoeuvres during the Visionen test. Results are shown with
and without the camera system.

Landing Manoeuvre   Camera   Horizontal Velocity          Vertical Velocity
                             RMSE    Max     95th         RMSE    Max     95th
1                   Yes      0.16    0.41    0.35         0.17    0.4     0.34
2                   Yes      0.14    0.54    0.32         0.11    0.38    0.20
3                   Yes      0.25    0.73    0.65         0.10    0.27    0.20
1                   No       0.19    0.53    0.39         0.18    0.39    0.34
2                   No       0.16    0.41    0.32         0.12    0.34    0.23
3                   No       0.17    0.44    0.31         0.10    0.39    0.19

The sensor measurements are also analysed to get further knowledge about
the estimation system. When studying the accelerometer data from the test, un-
expected behaviour was found. From the time of engine start, the raw data was
very noisy, which continued throughout the test. When the uav landed and the
engines were cut off, an oscillation transient could be seen in the data as well. It was suspected that the oscillations were caused by resonance in the sensor platform, excited by the vibrations from the engines. To reduce resonance in future tests,
the mounting of the platform could be revised. Another unwanted behaviour
from the accelerometer is that the data contained outliers. With the current im-
plementation, these outliers could not be detected by the nis-test as described in
Section 4.2.
To illustrate the performance of the camera system, its estimated position
during landing manoeuvre 2 is studied, see Figure 6.6. The camera system mostly
uses the small tag for the estimation, since the uav is too close to get the entire
large tag in the fov of its camera. Camera data is acquired during the entire
landing manoeuvre.
To study the performance of the uwb sensor network, the ground truth dis-
tance was calculated for each anchor-tag pair based on the anchor position as well
as the position of the uav and ugv from Qualisys. Figure 6.7 presents the perfor-
mance of anchor-tag pairs 1 and 4 during the complete flight: pair 1 follows the ground truth nicely, while pair 4 exhibits some serious outliers. This motivates the use of an outlier rejection algorithm. It has been confirmed that the nis-test, see Section 4.2, manages to reject this kind of outlier. To study the performance of all anchor-tag pairs further, the rmse, the maximum absolute error as well as the 95th percentile of the absolute error are calculated, see Table 6.6. The measurements from anchor-tag pair 3 had outliers of similar size as those from anchor-tag pair 4. Despite these outliers, 95 % of the measurements from all anchor-tag pairs have errors below 30 cm. These errors are caused both by the performance of the uwb radios and by inaccuracies when measuring the sensor placement.
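For reference, the sketch below shows what a scalar nis gate for a single uwb range measurement can look like. The chi-square threshold (a 99 % gate for one degree of freedom) is an assumption; the gate actually used is defined in Section 4.2.

```python
def nis_gate(innovation, innovation_variance, threshold=6.63):
    """Normalised innovation squared (NIS) test for one range measurement.

    innovation          : measured range minus predicted range
    innovation_variance : innovation covariance S from the filter
    threshold           : chi-square limit (6.63 ~ 99 % for 1 DOF, assumed)
    Returns True if the measurement is accepted, False if it is rejected.
    """
    nis = innovation**2 / innovation_variance
    return nis <= threshold
```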

Figure 6.6: Position estimation from the camera system in the second flight
manoeuvre from the Visionen test. The landing pad of the ugv with four
beams pointing upwards is shown as well.


Figure 6.7: uwb sensor performance for anchor-tag pairs 1 and 4 (top and
bottom). Measured distance is compared to ground truth.

Table 6.6: rmse, maximum absolute error as well as the 95th percentile
of the absolute error (in m) of the range measurements from all anchor-tag
pairs.

Anchor-tag pair   RMSE    Max     95th
1                 0.15    0.79    0.24
2                 0.13    0.62    0.26
3                 0.24    7.26    0.29
4                 0.63    8.48    0.23
7 Conclusion and Future Work
This chapter presents the conclusions of this thesis, with regards to the problem
formulation presented in the introduction, together with suggested future work.

7.1 Conclusions
A combined estimation and control system capable of landing a uav on a mobile
ugv has been presented. The combined system has been tested and evaluated in a
realistic simulation environment, developed with the aid of Gazebo, ArduCopter
SITL and ros. The individual sensors used in the system have been evaluated sep-
arately. Finally, the estimation system has been validated through experiments.
Initially, the performance of the uwb radios was evaluated with physical sensors. Based on the evaluation, three different placements of the uwb anchors were analysed with the crlb. The anchor placements were decided based on the results. The accuracy of the camera system was investigated with a simulated camera, to get an understanding of the bias and variance of its position estimate. From these evaluations, as well as small-scale experiments indicating the accuracy of the accelerometers and barometers, the estimation system was designed. The implementation of the sensors in the simulation environment was based on these results.
The control system proposed in this work utilises two control structures: a
pn controller is used to approach the moving ugv and a pid controller is used
to land on top of it. This structure is based on [8]; however, the implementation of the pn controller is modified and a pid controller is used instead of a pd controller. The control system has been evaluated in simulation and has shown positive results. The system managed to safely land the uav in all 100 simulations. The feasibility of the control system cannot be confirmed before a
test is conducted with physical hardware. However, since a sitl implementation


of the flight controller has been used during simulation, the control system will
most likely perform similarly in such an experiment. Despite the successful land-
ings, it was noted that the control system showed a slightly oscillatory behaviour during the landing phase, which results in a less robust system. One probable cause of this behaviour is that the control system does not account for future movements of the ugv.
The estimation system has been evaluated during simulated landings as well.
Combined with the control system, landings were achieved in 100 of 100 at-
tempts, which confirms a fully functional landing system. However, when com-
pared to the landing results using true data, the landing positions on the ugv are
more spread out and the uav is more prone to restart its landing attempts. A uav landing at night is also of interest; therefore, landings have also been simulated without the use of the camera system. The results show a much more uncertain behaviour during the landing phase, behaviour that would be unwelcome in a physical landing.
To validate the results from simulation, the estimation was evaluated exper-
imentally using the hardware platforms. The experiment was conducted in an
environment similar to the landing phase, except with a stationary ugv. The
estimation accuracy during the experiment was comparable to the accuracy in
simulation. This indicates the feasibility of the estimation system with real hard-
ware during the landing phase. The estimation system was also experimentally
evaluated without the use of the camera system. Results show that the accuracy is significantly lower when the camera system is not used: the estimation error is two to three times larger. When also considering the behaviour in simulation presented previously, we conclude that a physical landing without a camera is improbable.

7.2 Future Work


The long term intention of the landing system is to deploy it on real hardware.
Further experiments are required to get a better understanding of the validity
of the simulation results. First, the estimation system should be tested at longer
distances resembling the APPROACH-phase. Secondly, the control system should
be implemented and evaluated on the real uav platform.
The performance of the control system during the landing phase could potentially be improved by utilising knowledge about the movements of the ugv. Implementing such a feature would most likely reduce the oscillatory behaviour during the landing phase, which in turn would allow for a more robust system. The movements of the ugv could either be predicted or provided, assuming a communication link between the uav and the ugv. Baca et al. propose a movement prediction strategy in [9], which has been shown to work in a variety of real-world experiments.
A future goal is to have an estimation system functioning independently of the time of day. Thus, the current version of the estimation system needs to be modified, since an ordinary camera is used. Results show that the system performs significantly worse without a camera and a substitute is most likely needed. Further research should therefore investigate if an IR camera can be used instead.
Appendix

A Hardware Drivers

This appendix describes the hardware drivers used for communication between
the sensors and the Jetson Nano.

A.1 XSens Driver


The official Xsens ros-package xsens_mti_driver13 is used to provide imu data to
the ros network.

A.2 Camera Driver


The ros-package gscam14 is used on the Jetson processor to handle the csi communication with the camera. The package also provides an interface to the ros network and publishes the raw camera data to it at a rate of 10 Hz. The parameters of the camera model are published alongside. The ros-package image_proc15 is used to rectify the raw image data using the camera model.

A.3 Decawave data extraction


The DWM1001 Decawave UWB modules used in the landing system have a pre-built firmware library [40]. The firmware library supports a shell mode which accepts a list of commands. Via this shell mode it is possible to extract range data from Decawave modules acting as tags in a sensor network. The commands can be sent to a Decawave module through serial communication; in this case UART is used with a baud rate of 115200. There are a number of shell commands, but the one of interest in this case is the lec command. This command enables a mode which starts sending range data over UART in a CSV format. The data is provided with a frequency of 10 Hz and is given in the format:

[DIST, # Anchors, ID Anchor 1, X, Y, Z, Range, ID Anchor 2, ...]

13 Package at http://wiki.ros.org/xsens_mti_driver.
14 Package at http://wiki.ros.org/gscam.
15 Package at http://wiki.ros.org/image_proc
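As an illustration of how the lec output can be consumed, the sketch below reads and parses the CSV stream using pyserial. The serial port name and the per-anchor field layout are assumptions based on the format shown above, and the module is assumed to already be in shell mode.

```python
import serial

def read_ranges(port="/dev/ttyACM0"):
    """Yield dictionaries {anchor_id: range_m} parsed from the lec stream."""
    with serial.Serial(port, baudrate=115200, timeout=1.0) as ser:
        ser.write(b"lec\r\n")  # enable CSV range output (shell command)
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            if not line.startswith("DIST"):
                continue
            fields = line.split(",")
            n_anchors = int(fields[1])
            ranges = {}
            for i in range(n_anchors):
                base = 2 + i * 5   # assumed layout: ID, X, Y, Z, Range per anchor
                ranges[fields[base]] = float(fields[base + 4])
            yield ranges
```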
B Simulated Sensors
This appendix describes how the sensors were simulated.

B.1 Attitude
The standard imu-sensor in Gazebo is used to obtain attitude data. The data is noise-free. In order to get a more realistic simulation, noise is added to the attitude data before it is used. The noise is applied as an additional rotation matrix, called Rnoise. The Rnoise matrix is a rotation matrix generated from randomly selected Euler angles. The Euler angles (roll, pitch and yaw) are drawn from a Gaussian distribution with zero mean and 1° variance. On top of the randomly drawn angles a bias is added. The bias is randomly chosen as [0.7°, −0.5°, 0.6°]16, where each element corresponds to an Euler angle. The attitude data is used to compute a rotation matrix W Rb, which is then contaminated with noise as follows

W Rb,noise = Rnoise · W Rb,    (B.1)

where W Rb,noise is the rotation matrix used in simulation.

16 Based on the RMS values of a XSens MTi-600 IMU, see: https://www.xsens.com/hubfs/Downloads/Leaflets/MTi%20600-series%20Datasheet.pdf
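A minimal sketch of this noise model is given below, assuming SciPy is used for the Euler-to-rotation-matrix conversion and an xyz rotation sequence (the sequence is an assumption); the 1° variance is applied as a 1° standard deviation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

BIAS_DEG = np.array([0.7, -0.5, 0.6])  # bias per Euler angle, from the text

def noisy_attitude(R_wb):
    """Contaminate the body-to-world rotation matrix as in (B.1)."""
    # Random Euler angles (deg): zero mean, 1 deg standard deviation, plus bias.
    euler_deg = np.random.normal(0.0, 1.0, size=3) + BIAS_DEG
    R_noise = Rotation.from_euler("xyz", euler_deg, degrees=True).as_matrix()
    return R_noise @ R_wb
```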

B.2 Ultra-Wide Band Sensor Network


The uwb sensor network is simulated in Gazebo and used during the simulated landing task. The network consists of four anchors placed on the ugv according to the configuration chosen in Section 4.4.3. One tag is placed at the centre of the uav as well. For each anchor-tag pair, the measured distance is calculated as
presented in (4.15), with the noise variance determined in Section 4.4.2.
However, as described in Table 4.1 in Section 4.4.2, the parameters a and b
can vary between sensors. Therefore, to get a more realistic measurement, a and b are modelled as random variables with a uniform distribution U. The scaling factor a is bounded to the interval [1.0028, 1.0036] and the bias b is bounded to the interval [0.01, 0.1], i.e.

a ∼ U(1.0028, 1.0036),   b ∼ U(0.01, 0.1).    (B.2)
The parameters a and b are generated for each anchor-tag pair at the start of a
simulation.
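A minimal sketch of the simulated range for one anchor-tag pair follows. The affine form measured = a · d + b plus white Gaussian noise is stated as an assumption consistent with the text, since (4.15) itself is not reproduced in this appendix, and the noise standard deviation is a placeholder.

```python
import numpy as np

# Pair-specific parameters, drawn once per simulation as in (B.2).
a = np.random.uniform(1.0028, 1.0036)
b = np.random.uniform(0.01, 0.1)

def simulate_range(p_anchor, p_tag, a, b, sigma=0.05):
    """Simulated UWB range measurement for one anchor-tag pair (sketch)."""
    true_dist = np.linalg.norm(np.asarray(p_anchor) - np.asarray(p_tag))
    return a * true_dist + b + np.random.normal(0.0, sigma)
```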

B.3 Camera
Gazebo is capable of simulating a camera sensor17 . The sensor is attached to a
model in Gazebo and outputs a simulated camera stream based on the pose of
the model. The camera parameters calculated in Section 4.5.1 are used with the
simulated camera to allow a more realistic simulation. Pixelwise awgn is also
added to the simulated image.

B.4 Accelerometer
Gazebo has an implementation of an imu-sensor18 . The sensor provides the lin-
ear acceleration of the body it is attached to and the data is represented in a body-
fixed coordinate system. The simulated sensor supports modelling of awgn.

B.5 Reference Barometer


The barometers are simulated in Gazebo and used during the simulated landing task. The pressure generated by the simulated sensors comes from the inverse of equation (3.26), which is presented here

P = P0 · [1 + L(z − z0)/T].    (B.3)

The lower altitude z0 is set to zero. P0 and T are set to 101.325 kPa and 288 K respectively, which are the ISA standard values for pressure and temperature at sea level, see Section 3.9. The altitude z is the true altitude of the model the sensor is attached to in Gazebo. In order to get a more realistic pressure measurement, awgn is added to the generated pressure data. The variance corresponds to the variance of the real barometer sensors, which is chosen as 14 based on the results in Section 4.7.2.
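A minimal sketch of the simulated pressure measurement is given below. The lapse rate L is not given a numerical value in this appendix, so the ISA value −0.0065 K/m is an assumption, and the noise variance of 14 is used in the pressure unit implied by Section 4.7.2.

```python
import numpy as np

P0 = 101.325e3    # Pa, ISA sea-level pressure
T0 = 288.0        # K, ISA sea-level temperature
L = -0.0065       # K/m, assumed ISA lapse rate
NOISE_VAR = 14.0  # pressure noise variance (unit as in Section 4.7.2)

def simulate_pressure(z, z0=0.0):
    """Simulated barometer reading at altitude z, following (B.3) as written."""
    pressure = P0 * (1.0 + L * (z - z0) / T0)
    return pressure + np.random.normal(0.0, np.sqrt(NOISE_VAR))
```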
17 A description of the Gazebo camera sensor can be found at:
http://sdformat.org/spec?ver=1.7&elem=sensor
18 See http://gazebosim.org/tutorials?tut=ros_gzplugins.
C Gazebo Plugins

This appendix describes the Gazebo plugins developed for the simulation environment.

C.1 Wind Force Plugin


A plugin was created to apply a wind force to the uav in the simulation. The wind was modelled as a force vector. The wind force is applied in the horizontal plane and the magnitude of the force vector is characterised as a telegraph process. The two states of the telegraph process are 0.5 N and 1.0 N. The mean time before the process switches to the other state is 6 s. When the plugin is initialised a random wind direction is chosen. During each iteration of the simulation, noise is added to the direction. An illustration of the wind force can be seen in Figure C.1.
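A minimal sketch of this wind model follows: the magnitude is a two-state telegraph process (0.5 N and 1.0 N, mean holding time 6 s) and the direction is a random initial angle with small per-step noise. The direction-noise standard deviation and the time step are assumptions; the text only states that noise is added each iteration.

```python
import numpy as np

def wind_force_sequence(duration, dt=0.01, f_states=(0.5, 1.0),
                        mean_switch=6.0, dir_noise_std=0.02):
    """Return an (N, 2) array with the horizontal wind force over time."""
    n = int(duration / dt)
    angle = np.random.uniform(0.0, 2.0 * np.pi)   # random initial direction
    magnitude = f_states[0]
    forces = np.zeros((n, 2))
    for k in range(n):
        # Telegraph process: switch with probability dt / mean_switch,
        # which gives an average holding time of mean_switch seconds.
        if np.random.random() < dt / mean_switch:
            magnitude = f_states[1] if magnitude == f_states[0] else f_states[0]
        angle += np.random.normal(0.0, dir_noise_std)
        forces[k] = magnitude * np.array([np.cos(angle), np.sin(angle)])
    return forces
```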

C.2 Air Resistance Plugin


Gazebo does not apply air resistance to the uav model during flight; therefore, a plugin was created to simulate it. The plugin applies a force in the opposite direction of the velocity of the uav. The force is computed in the same way in the x-, y- and z-directions. In the case of the x-direction, the force is calculated as follows

Fwind,x = −sign(vx) · kx · vx²,    (C.1)

where vx is the velocity in the x-direction and kx is a positive constant.
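A minimal sketch of (C.1) applied component-wise is shown below; the drag constants in the usage example are placeholders, since their values are not stated in this appendix.

```python
import numpy as np

def air_resistance(v, k):
    """Drag force per axis: F = -sign(v) * k * v^2, applied component-wise."""
    v = np.asarray(v, dtype=float)
    k = np.asarray(k, dtype=float)
    return -np.sign(v) * k * v**2

# Example with placeholder constants (kx, ky, kz).
force = air_resistance(v=[3.0, -1.5, 0.2], k=[0.1, 0.1, 0.2])
```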



Figure C.1: Example of the horizontal wind force during a simulation run.

C.3 Random Trajectory Plugin


A plugin was created to enable the ugv to drive in a random pattern during simulations. When the simulation is initialised, the ugv starts moving forward at a velocity of 4 m/s. Following this, the direction of the ugv either stays constant, turns +0.2 rad or turns −0.2 rad every 4 s. Each of the direction changes has equal probability. An example of a resulting ugv trajectory can be seen in Figure C.2.
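A minimal sketch of the heading logic is given below; the initial heading of 0 rad is an assumption, and the constant forward speed of 4 m/s is handled outside this function.

```python
import numpy as np

def ugv_heading_sequence(duration, decision_interval=4.0, turn=0.2):
    """Heading (rad) after each 4 s decision: stay constant, turn +0.2 rad
    or turn -0.2 rad, each option with probability 1/3."""
    headings = [0.0]  # assumed initial heading
    for _ in range(int(duration / decision_interval)):
        delta = np.random.choice([0.0, turn, -turn])
        headings.append(headings[-1] + delta)
    return headings
```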


Figure C.2: Example of the ugv’s trajectory when using the Random Trajec-
tory plugin.
D Additional Results
This appendix presents additional results from Sections 5.4 and 6.1.

D.1 Time Distributions

(a) All states: mean 39.5 s, median 35.0 s. (b) APPROACH: mean 20.0 s, median 15.9 s. (c) FOLLOW and DESCEND: mean 19.5 s, median 19.0 s.

Figure D.1: Time distributions for the 100 simulation runs using true states. The first subfigure (a) shows the total time distribution of the landing manoeuvre, i.e. the APPROACH, FOLLOW and DESCEND states. The second subfigure (b) shows the distribution of the time spent in the APPROACH state. The third subfigure (c) shows the time distribution for the landing phase (the FOLLOW and DESCEND states).


(a) All states: mean 37.9 s, median 36.8 s. (b) APPROACH: mean 17.4 s, median 16.4 s. (c) FOLLOW and DESCEND: mean 20.5 s, median 19.9 s.

Figure D.2: Time distributions for the 100 simulation runs using estimated states. The first subfigure (a) shows the total time distribution of the landing manoeuvre, i.e. the APPROACH, FOLLOW and DESCEND states. The second subfigure (b) shows the distribution of the time spent in the APPROACH state. The third subfigure (c) shows the time distribution for the landing phase (the FOLLOW and DESCEND states).

D.2 Root Mean Square Error

(a) rmse horizontal position: mean 4.54 m, standard deviation 0.98 m. (b) rmse vertical position: mean 0.112 m, standard deviation 0.030 m. (c) rmse horizontal velocity: mean 0.877 m/s, standard deviation 0.223 m/s. (d) rmse vertical velocity: mean 0.121 m/s, standard deviation 0.040 m/s.

Figure D.3: Distribution of root mean square error (rmse) for both horizon-
tal and vertical position and velocity, during the Approach-state over 100
runs using the camera system.

(a) rmse horizontal position: mean 0.105 m, standard deviation 0.027 m. (b) rmse vertical position: mean 0.046 m, standard deviation 0.013 m. (c) rmse horizontal velocity: mean 0.170 m/s, standard deviation 0.061 m/s. (d) rmse vertical velocity: mean 0.085 m/s, standard deviation 0.034 m/s.

Figure D.4: Distribution of root mean square error (rmse) for both hori-
zontal and vertical position and velocity, during the Landing phase over 100
runs using the camera system.

(a) rmse horizontal position: mean 4.46 m, standard deviation 0.87 m. (b) rmse vertical position: mean 0.112 m, standard deviation 0.042 m. (c) rmse horizontal velocity: mean 0.924 m/s, standard deviation 0.293 m/s. (d) rmse vertical velocity: mean 0.125 m/s, standard deviation 0.044 m/s.

Figure D.5: Distribution of root mean square error (rmse) for both horizon-
tal and vertical position and velocity, during the Approach-state over 100
runs without the camera system.

(a) rmse horizontal position: mean 0.174 m, standard deviation 0.055 m. (b) rmse vertical position: mean 0.057 m, standard deviation 0.020 m. (c) rmse horizontal velocity: mean 0.203 m/s, standard deviation 0.079 m/s. (d) rmse vertical velocity: mean 0.094 m/s, standard deviation 0.054 m/s.

Figure D.6: Distribution of root mean square error (rmse) for both hori-
zontal and vertical position and velocity, during the Landing phase over 100
runs without the camera system.
Bibliography

[1] Nikolas Giakoumidis, Jin U. Bak, Javier V. Gómez, Arber Llenga, and Niko-
laos Mavridis. Pilot-scale development of a UAV-UGV hybrid with air-based
UGV path planning. Proceedings - 10th International Conference on Fron-
tiers of Information Technology, FIT 2012, (December):204–208, 2012. doi:
10.1109/FIT.2012.43.

[2] Jouni Rantakokko, Erik Axell, Niklas Stenberg, Jonas Nygårds, and Joakim
Rydell. Tekniker för navigering i urbana och störda GNSS-miljöer. Technical
Report FOI-R–4907–SE, Ledningssystem, Totalförsvarets Forskningsinstitut
(FOI), December 2019.

[3] Andrew J. Kerns, Daniel P. Shepard, Jahshan A. Bhatti, and Todd E. Humphreys. Unmanned Aircraft Capture and Control Via GPS Spoofing. Journal of Field Robotics, 31(4):617–636, 2014. doi: 10.1109/ICIF.2008.4632328.

[4] Sinan Gezici, Zhi Tian, Georgios B. Giannakis, Hisashi Kobayashi, Andreas F.
Molisch, H. Vincent Poor, and Zafer Sahinoglu. Localization via ultra-
wideband radios: A look at positioning aspects of future sensor networks.
IEEE Signal Processing Magazine, 22(4):70–84, 2005. ISSN 10535888. doi:
10.1109/MSP.2005.1458289.

[5] Fabrizio Lazzari, Alice Buffi, Paolo Nepa, and Sandro Lazzari. Numerical in-
vestigation of an UWB localization technique for unmanned aerial vehicles
in outdoor scenarios. IEEE Sensors Journal, 17(9):2896–2903, 2017.

[6] Linnea Persson. Cooperative Control for Landing a Fixed-Wing Unmanned Aerial Vehicle on a Ground Vehicle. Master's thesis, KTH Royal Institute of Technology, 2016.

[7] Linnea Persson. Autonomous and Cooperative Landings Using Model Pre-
dictive Control. Licentiate thesis, KTH Royal Institute of Technology, Stock-
holm, 2019.

[8] Alexandre Borowczyk, Duc Tien Nguyen, André Phu Van Nguyen, Dang Quang Nguyen, David Saussié, and Jerome Le Ny. Autonomous landing of a quadcopter on a high-speed ground vehicle. Journal of Guidance, Control, and Dynamics, 40(9):2373–2380, 2017. ISSN 07315090. doi: 10.2514/1.G002703.
[9] Tomas Baca, Petr Stepan, Vojtech Spurny, Daniel Hert, Robert Penicka,
Martin Saska, Justin Thomas, Giuseppe Loianno, and Vijay Kumar. Au-
tonomous landing on a moving vehicle with an unmanned aerial vehi-
cle. Journal of Field Robotics, 36(5):874–891, 2019. ISSN 15564967. doi:
10.1002/rob.21858.
[10] Sven Lange, Niko Sünderhauf, and Peter Protzel. A vision based onboard ap-
proach for landing and position control of an autonomous multirotor UAV
in GPS-denied environments. 2009 International Conference on Advanced
Robotics, ICAR 2009, 2009.
[11] Karl Engelbert Wenzel, Andreas Masselli, and Andreas Zell. Automatic take
off, tracking and landing of a miniature UAV on a moving carrier vehicle.
Journal of Intelligent and Robotic Systems: Theory and Applications, 61
(1-4):221–238, 2011. ISSN 09210296. doi: 10.1007/s10846-010-9473-0.
[12] Stephen Chaves, Ryan Wolcott, and Ryan Eustice. NEEC Research: Toward
GPS-denied Landing of Unmanned Aerial Vehicles on Ships at Sea. Naval
Engineers Journal, 127(1):23–35, 2015. ISSN 0028-1425.
[13] Oualid Araar, Nabil Aouf, and Ivan Vitanov. Vision Based Autonomous
Landing of Multirotor UAV on Moving Platform. Journal of Intelligent
and Robotic Systems: Theory and Applications, 85(2):369–384, 2017. ISSN
15730409. doi: 10.1007/s10846-016-0399-z. URL http://dx.doi.org/
10.1007/s10846-016-0399-z.
[14] Janis Tiemann, Florian Schweikowski, and Christian Wietfeld. Design of an
UWB indoor-positioning system for UAV navigation in GNSS-denied envi-
ronments. 2015 International Conference on Indoor Positioning and Indoor
Navigation, IPIN 2015, pages 1–7, 2015. doi: 10.1109/IPIN.2015.7346960.
[15] Thien Minh Nguyen, Abdul Hanif Zaini, Kexin Guo, and Lihua Xie. An
Ultra-Wideband-based Multi-UAV Localization System in GPS-denied envi-
ronments. In International Micro Air Vehicles Conference, 2016.
[16] P. Nordin and J. Nygårds. Local navigation using traversability maps. IFAC
Proceedings Volumes (IFAC-PapersOnline), 7(PART 1):324–329, 2010. ISSN
14746670. doi: 10.3182/20100906-3-it-2019.00057.
[17] Peter Nordin, Lars Andersson, and Jonas Nygårds. Results of the
tais/prerunners-project. In Fourth Swedish Workshop on Autonomous
Robotics SWAR’09, pages 60–61, 2009.
[18] James Diebel. Representing attitude: Euler angles, unit quaternions, and
rotation vectors. Matrix, 58:1–35, 2006. ISSN 14602431. doi: 10.1093/jxb/
erm298. URL ftp://sbai2009.ene.unb.br/Projects/GPS-IMU/
George/arquivos/Bibliografia/79.pdf.

[19] Robert Mahony, Vijay Kumar, and Peter Corke. Multirotor aerial vehicles:
Modeling, estimation, and control of quadrotor. IEEE Robotics and Au-
tomation Magazine, 19(3):20–32, 2012. ISSN 10709932. doi: 10.1109/MRA.
2012.2206474.

[20] Rudolph Emil Kalman. A new approach to linear filtering and prediction
problems. ASME Journal of Basic Engineering, 1960.

[21] Andrew H. Jazwinski. Stochastic processes and filtering theory. Number 64 in Mathematics in science and engineering. Academic Press, New York, 1970. ISBN 0123815509.

[22] Guobin Chang. Robust Kalman filtering based on Mahalanobis distance as outlier judging criterion. Journal of Geodesy, 88(4):391–401, 2014. ISSN 14321394. doi: 10.1007/s00190-013-0690-8.

[23] Yoichi Morales, Eijiro Takeuchi, and Takashi Tsubouchi. Vehicle localization
in outdoor woodland environments with sensor fault detection. Proceedings
- IEEE International Conference on Robotics and Automation, pages 449–
454, 2008. ISSN 10504729. doi: 10.1109/ROBOT.2008.4543248.

[24] Fredrik Gustafsson. Statistical sensor fusion. Studentlitteratur, 2010.

[25] Amir Beck, Petre Stoica, and Jian Li. Exact and approximate solutions of
source localization problems. IEEE Transactions on Signal Processing, 56
(5):1770–1778, 2008. ISSN 1053587X. doi: 10.1109/TSP.2007.909342.

[26] OpenCV. Camera Calibration and 3D Reconstruction, 2019. URL https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html.

[27] Duane Brown. Close-range camera calibration, 1971.

[28] Martin Enqvist, Torkel Glad, Svante Gunnarsson, Peter Lindskog, Lennart
Ljung, Johan Löfberg, Tomas McKelvey, Anders Stenman, and Jan-Erik
Strömberg. Industriell reglerteknik kurskompendium. Linköpings Univer-
sitet, Linköping, 2014.

[29] Elsevier Science & Technology. Missile Guidance and Pursuit : Kinemat-
ics, Dynamics and Control. Elsevier Science & Technology, 1998. ISBN
9781904275374.

[30] Guangwen Liu, Masayuki Iwai, Yoshito Tobe, Dunstan Matekenya, Khan
Muhammad Asif Hossain, Masaki Ito, and Kaoru Sezaki. Beyond horizontal
location context: Measuring elevation using smartphone’s barometer. Ubi-
Comp 2014 - Adjunct Proceedings of the 2014 ACM International Joint Con-
ference on Pervasive and Ubiquitous Computing, pages 459–468, 2014. doi:
10.1145/2638728.2641670.

[31] Mustafa Cavcar. International Standard Atmosphere. In BS. Fluid Mechanics in Channel, Pipe and Aerodynamic Design Geometries 2, pages 251–258, 2018. doi: 10.1002/9781119457008.app5.
[32] Kexin Guo, Zhirong Qiu, Cunxiao Miao, Abdul Hanif Zaini, Chun-Lin Chen,
Wei Meng, and Lihua Xie. Ultra-Wideband-Based Localization for Quad-
copter Navigation. Unmanned Systems, 04(01):23–34, 2016. ISSN 2301-
3850. doi: 10.1142/s2301385016400033.

[33] James Bowman and Patrick Mihelich. How to Calibrate a Monocular Camera, 2019. URL http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration.
[34] Edwin Olson. AprilTag: A robust and flexible visual fiducial system. Pro-
ceedings - IEEE International Conference on Robotics and Automation,
pages 3400–3407, 2011. ISSN 10504729. doi: 10.1109/ICRA.2011.5979561.
[35] John Wang and Edwin Olson. AprilTag 2: Efficient and robust fiducial de-
tection. IEEE International Conference on Intelligent Robots and Systems,
2016-Novem:4193–4198, 2016. ISSN 21530866. doi: 10.1109/IROS.2016.
7759617.

[36] Danylo Malyuta. Guidance, Navigation, Control and Mission Logic for
Quadrotor Full-cycle Autonomy. Master’s thesis, ETH Zurich, 2018.
[37] Maximilian Krogius, Acshi Haggenmiller, and Edwin Olson. Flexible Lay-
outs for Fiducial Tags. IEEE International Conference on Intelligent Robots
and Systems, pages 1898–1903, 2019. ISSN 21530866. doi: 10.1109/
IROS40897.2019.8967787.
[38] Alvika Gautam, P. B. Sujit, and Srikanth Saripalli. Application of guidance
laws to quadrotor landing. 2015 International Conference on Unmanned
Aircraft Systems, ICUAS 2015, pages 372–379, 2015. doi: 10.1109/ICUAS.
2015.7152312.

[39] Ruoyu Tan and Manish Kumar. Proportional navigation (PN) based track-
ing of ground targets by quadrotor UAVs. ASME 2013 Dynamic Sys-
tems and Control Conference, DSCC 2013, 1(October 2013), 2013. doi:
10.1115/DSCC2013-3887.

[40] Decawave. DWM1001 Firmware Application Programming Interface (API) Guide, pages 1–74, 2017.
