Research Paper
Autonomous Landing of an Unmanned Aerial Vehicle on an Unmanned Ground Vehicle in a GNSS-denied Scenario
Acknowledgments
We would like to thank our supervisors Jouni Rantakokko, Jonas Nygårds and
Joakim Rydell at foi, who have all shown interest in this work and helped us
throughout this master’s thesis project. Special thanks go to Jouni Rantakokko
for his thorough work in proofreading and giving criticism on this thesis, and
to Joakim Rydell, who assisted in the experimental test.
At Linköping University, we would like to thank our supervisor Anton Kullberg
for his help with this thesis. We would also like to take the opportunity to
thank our opponents Carl Hynén Ulfsjö and Theodor Westny for their construc-
tive comments. Finally, we would like to thank our examiner Gustaf Hendeby
for his great interest throughout this master’s thesis project as well as his aid
during the experimental test.
As a closing remark, we would like to praise both foi and Linköping Uni-
versity for doing their very best in making sure that this master’s thesis could
proceed, despite the covid-19 pandemic.
Contents

Notation

1 Introduction
1.1 Background
1.2 Related Work
1.3 Approach
1.4 Problem Formulation
1.5 Limitations
1.6 Contributions
1.7 Outline of Thesis

2 System Description
2.1 Unmanned Aerial Vehicle
2.1.1 Platform
2.1.2 Sensors
2.2 Unmanned Ground Vehicle
2.2.1 Platform
2.2.2 Sensors
2.3 Software Tools
2.3.1 Robot Operating System
2.3.2 ArduCopter Flight Controller
2.4 Simulation Environment

3 Theory
3.1 Coordinate Systems
3.2 Rotation Representations
3.3 Quadcopter Dynamical Model
3.4 Kalman Filter Theory
3.4.1 Measurement Outliers
3.4.2 Measurement Information Gain
3.5 Position Trilateration
3.6 Camera Model
3.7 Proportional-Integral-Derivative Control

4 Estimation System
4.1 Estimation System Description
4.2 Extended Kalman Filter
4.3 Attitude Measurements
4.4 Ultra-Wide Band Sensor Network
4.4.1 Sensor Model
4.4.2 Model Parameter Tests
4.4.3 Anchor Configuration
4.5 Camera System
4.5.1 Camera
4.5.2 Fiducial Detection Algorithm
4.5.3 Fiducial Marker
4.5.4 Coordinate Transform
4.5.5 Error Model
4.6 Accelerometer
4.7 Reference Barometer
4.7.1 Sensor Model
4.7.2 Error Model

5 Control System
5.1 Control System Description
5.2 State Machine
5.2.1 Approach
5.2.2 Follow
5.2.3 Descend
5.3 Implementation
5.3.1 Vertical Controller
5.3.2 Horizontal pid Controller
5.3.3 Proportional Navigation Control
5.4 Control System Evaluations

A Hardware Drivers
A.1 XSens Driver
A.2 Camera Driver
A.3 Decawave data extraction

B Simulated Sensors
B.1 Attitude
B.2 Ultra-Wide Band Sensor Network
B.3 Camera
B.4 Accelerometer
B.5 Reference Barometer

C Gazebo Plugins
C.1 Wind Force Plugin
C.2 Air Resistance Plugin
C.3 Random Trajectory Plugin

D Additional Results
D.1 Time Distributions
D.2 Root Mean Square Error

Bibliography
Notation
Notations
Notation     Meaning
[x, y, z]    Position in a Cartesian coordinate system
[φ, θ, ψ]    Euler angles roll, pitch and yaw
{a}          Coordinate frame a
aRb          Rotation matrix from {b} to {a}
x            State vector
cx           cos x
sx           sin x
R            Covariance matrix
||·||n       n-norm of ·
ȧ            Time derivative of a
⊗            Kronecker product
AT           Transpose of matrix A
≜            Definition statement
Abbreviations
Abbreviation Meaning
awgn Additive White Gaussian Noise
crlb Cramér-Rao Lower Bound
csi Camera Serial Interface
foi Swedish Defence Research Agency
gnss Global Navigation Satellite System
gps Global Positioning System
ekf Extended Kalman Filter
fov Field of View
imu Inertial Measurement Unit
ins Inertial Navigation System
los Line of Sight
mpc Model Predictive Control
nis Normalised Innovation Squared
nlos Non Line of Sight
pd Proportional-Derivative (controller)
pdf Probability Density Function
pid Proportional-Integral-Derivative (controller)
pn Proportional Navigation
rmse Root Mean Square Error
ros Robot Operating System
rtk Real Time Kinematic
rsta Reconnaissance, Surveillance and Target Acquisition
sitl Software In The Loop
toa Time of Arrival
uav Unmanned Aerial Vehicle
ugv Unmanned Ground Vehicle
uwb Ultra-Wide Band
1 Introduction
This master’s thesis project was conducted at the Swedish Defence Research Agency
(foi) at the Division of C4ISR1 in Linköping. The main goal of the project was to
investigate the possibility of landing a quadrotor Unmanned Aerial Vehicle (uav)
on top of a mobile Unmanned Ground Vehicle (ugv) in a gnss-denied scenario.
The purpose of this chapter is to present the background as to why the project
was conducted, related work done in the research community, the approach of the
thesis, the problems the thesis aims to answer and the limitations and contribu-
tions of the project. Finally, an outline of the thesis is provided.
1.1 Background
During the last couple of decades, the interest in, as well as the research and de-
velopment (R&D) on, unmanned and autonomous systems has grown rapidly. In
this thesis, autonomous systems are defined as systems which can operate with-
out direct human control or intervention. The various R&D activities have re-
sulted in unmanned systems that can perform increasingly more complex tasks
and support military operations in all domains (air, ground and sea). Currently,
the R&D community is exploring various approaches for combining different
types of unmanned platforms, which can complement each other’s abilities. One
such conceivable scenario is an unmanned ground vehicle (ugv) acting as a host
platform for an unmanned aerial vehicle (uav) to carry out various missions, as
explored in [1]. The underlying idea is to exploit the advantages while mitigating
the disadvantages for both types of vehicles. The ground vehicle can continue to
operate during harsh weather conditions and has the endurance to operate for an
extensive period. In contrast, small uavs have limited endurance. The benefit
1 Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance
of the uav is instead that it can move at higher speeds, can cover large areas and
provide an overview of the surrounding environment. The uav may also act as
an elevated communication relay. Additionally, the ugv acts as a transport and
recharge station for the uav.
The task of developing a combined uav and ugv reconnaissance, surveillance
and target acquisition (rsta) system has several non-trivial research challenges,
which each could encapsulate a single thesis project. Therefore, the challenge
which this master’s thesis is exploring concerns the autonomous landing of the
uav on top of a moving ugv.
In order to perform an autonomous landing, a basic requirement is to esti-
mate the relative position and orientation of the uav with respect to the ugv.
The localisation is performed with sensors on the uav as well as on the ugv. Be-
cause of the military application and the importance of the landing phase, there
are requirements on the sensors regarding robustness as well as low probability
of detection. Additionally, a future goal is for the uav to be able to land at night
time as well, which should be considered in the choice of sensors. One common
approach to localise a uav is with the use of global navigation satellite system
(gnss) receivers. Though gnss-receivers are small, low cost and readily avail-
able, they do come with some flaws. In urban environments gnss signals can be
subjected to multi-path propagation, attenuation and diffraction which can im-
pair the accuracy of the gnss-receivers [2]. gnss-receivers are also sensitive to
jamming and spoofing. For example Kerns et al. [3] were able to send a drone into
an unwanted dive through spoofing of a global positioning system (gps) signal.
Because of these flaws, this thesis will investigate the possibility of autonomously
landing a uav on a moving ugv in a gnss-denied scenario.
A localisation technique that has gained attention lately is the use of ultra-
wide band (uwb) impulse radios. uwb radios send data at bandwidths surpassing
500 MHz. The large bandwidth has many advantages, such as being insensitive
to disturbances as well as not being affected by multipath propagation [4]. Addi-
tionally, it results in a low power spectral density, which reduces the risk of being
detected. A thorough description of how uwb radios work and how they can be
used for localisation is provided in [4]. The standard approach of localisation
using uwb radios is based on having multiple stationary uwb radios (anchors)
placed in an environment in which a mobile uwb radio (tag) is to be located. The
distances between the tag and the anchors are measured and used to calculate an
estimate of the tag’s position relative to the anchors. When combining the dis-
tance measurements, a 3D position can be trilaterated with a minimum of four
anchors [5].
on a car moving at 70 km/h. During the landings a pid control structure was
used. The position estimation of the vehicles was achieved using two real time
kinematic (rtk) gps receivers together with a camera system measuring the rela-
tive position between the vehicles. The work was extended in [7] where a model
predictive control (mpc) structure was used instead.
In [8] a similar result was achieved with a quadrotor uav. Borowczyk et al.
managed to land the quadrotor on a car travelling at speeds up to 50 km/h. In this
case a proportional navigation (pn) controller was used to approach the car. A
switching strategy was used, where a proportional-derivative (pd) controller was
used during the landing phase. An inertial measurement unit (imu), a camera
and gps receivers were used together with a Kalman filter to estimate the motion
of the two vehicles.
In 2017, the Mohamed Bin Zayed International Robotics Challenge (mbzirc) was
organised with the goal of expanding the state of the art in a variety of robotics do-
mains [9]. One challenge in the competition was to autonomously land a uav on
a moving ugv (15 km/h). The system with which Baca et al. [9] participated achieved
the shortest landing time. In their solution, an mpc structure was used. To obtain a
state estimate of the uav’s movements, inertial sensors, a rangefinder, a camera
and an rtk gps receiver were used. However, no sensors could be placed on the
ugv. Baca et al. therefore used a prediction strategy for the ugv’s movements.
Although these research groups achieved the goal of autonomous landing, all the
proposed approaches rely on gnss-receivers.
Autonomous landing of a uav in a gnss-denied scenario is an active research
field, primarily considering confined indoor environments, and the concept has
been investigated by various research groups. The most common strategy is posi-
tioning based on visual data, usually a camera identifying a fiducial marker. In
[10] a vision-based detection algorithm is used to estimate the 3D-position of a
uav relative to a tag. Wenzel et al. [11] utilise an IR-camera mounted on a uav
to track a pattern of IR-lights on a moving platform. The AprilTag fiducial algo-
rithm is used with a camera for positioning during an autonomous landing of a
uav in [12] and [13]. The common denominator of these works is that they all
utilise a proportional-integral-derivative (pid) controller in order to accomplish
the autonomous landing. The focus on visual data in these works means that they
can only land from a starting position in close proximity of the ugv, since the
ugv must be within the camera’s field of view (fov) during the majority of the
landing.
An approach to gnss-denied positioning that works at longer distances is to use
uwb anchors to position a tag in a sensor network. For example, [14, 15]
examined the specific task of estimating the position of a uav with uwb radios.
These articles show successful results; however, the positioning is conducted in
a closed environment where the tag (on the uav) is positioned in the volume
spanned by the anchors. Such an approach is not feasible in an arbitrary environ-
ment. Lazzari et al. [5] numerically investigate the approach of having anchors
mounted on a ugv and a tag mounted on a uav. The results from [5] are promis-
ing, with 3D-positioning of the uav achieved at distances up to 80 m.
1.3 Approach
In this work, the estimation techniques used in gnss-denied scenarios are com-
bined with control approaches that have proven to work well in challenging real-
world scenarios. The landing system is divided into an estimation system and a
control system. The estimation system has the goal of estimating the relative posi-
tion between the vehicles. The control system utilises these estimates to steer the
uav towards the ugv, and eventually land.
The approach in the estimation system is to substitute gnss-receivers with
a uwb sensor network. The uwb sensor network consists of four uwb anchors
mounted on the ugv measuring the distance to a uwb tag, which is mounted on
the uav. To aid in the relative positioning at shorter distances, a camera system
consisting of a uav-mounted camera identifying a fiducial marker on the ugv
is added. Additionally, an imu is mounted on each vehicle. The imu provides
3D accelerations, rotation rates as well as magnetic field and air pressure (scalar)
measurements. An extended Kalman filter (ekf) utilises these sensors to estimate
a relative position between the vehicles.
The proposed control system is mainly based on [8]. Two control laws are
used. A pn law is used to approach the ugv. At closer distances, the control
system switches to a pid controller with the goal of matching the movement of
the ugv, while descending towards it.
Also, since the long-term goal is a system that can land independent of weather
conditions or time of day, the following question is also interesting:
1.5 Limitations
This thesis will only consider the landing phase for the uav, which is when the
uav is at most 50 m away from the ugv. The ugv is therefore assumed to be
in line of sight of the uav during the entire landing task. Additionally, it is
assumed that the landing takes place in an obstacle-free environment. The ugv
is also assumed to be moving only in the horizontal plane. Finally, the landing
system can only control the uav when performing the landing manoeuvre. These
limitations do not restrict the core problem of autonomously landing a uav in a
gnss-denied scenario.
1.6 Contributions
Due to the complexity of the task, openly available software is utilised whenever
available. The main contribution of the thesis is the selection and evaluation of
estimation and control algorithms, as well as the design and software integration
of the system. More specifically, the contributions in this thesis are the following:
• Evaluation of the uwb radios’ performance.
• Analysis of uwb anchor configuration using crlb.
• Design of a complete landing system, consisting of an estimation and a
control system.
• Evaluation of the landing system in a realistic simulation environment.
• Experimental evaluation of the estimation system’s performance during the
final descent.
With regard to the personal contributions, Albin Andersson Jagesten was re-
sponsible for integrating the simulation environment, implementing the landing
system and the development of the Control System. Alexander Källström had the
responsibility of the development of the Estimation System, the evaluation of the
uwb radios, analysis of the uwb anchor configuration as well as the experimental
evaluation of the estimation system. However, both authors were involved in the
development of all the various parts of this master’s thesis project.
2.1.1 Platform
The uav used in this project is a quadrotor platform that can be seen in Figure
2.2. The uav has the following sensors mounted on it:
2 For a detailed description, see: https://docs.px4.io/v1.9.0/en/flight_controller/pixhawk_mini.html
3 Jetson Nano Developer Kit documentation: https://developer.nvidia.com/embedded/jetson-nano-developer-kit
Figure 2.1: Overview of the components that make up the landing system.
The sensors are mounted on a platform, see Figure 2.3, which in turn is mounted
underneath the uav. The sensors are connected to the Jetson Nano processor unit,
which is also mounted on the platform. The drivers used to communicate with
the sensors are described in Appendix A.
2.1.2 Sensors
The uav is equipped with a Decawave DWM1001 uwb module acting as a uwb
tag in the uwb sensor network. The DWM1001 contains a uwb-transceiver as
well as a processing unit. The uwb tag measures the individual distances to the
anchors in the uwb sensor network.
The Raspberry Pi Camera Module V2 is a camera facing downwards from the
uav. The camera has a horizontal field of view (fov) of 62.2◦ and is configured
to send images with a dimension of 640 × 480 pixels at a rate of 10 Hz.
The Xsens MTi-100 imu is utilised for its accelerometer and barometer mea-
surements. The three-axis accelerometer measures the acceleration vector of the
uav and the barometer measures the pressure of the air around the uav.
As presented in Figure 2.1, the Jetson Nano receives sensor data from the
Pixhawk Mini, which is in the form of an attitude estimate of the uav.
2.1 Unmanned Aerial Vehicle 9
Figure 2.3: The sensor platform with the Jetson Nano, camera, imu and uwb tag.
2.2.1 Platform
The ugv is a modified electrical wheelchair. It has been used as a test platform
in previous research at foi [16, 17]. On top of the wheelchair a landing platform
is mounted with dimensions 1.5 × 1.5 m². A fiducial marker with dimensions
1.4 × 1.4 m² is placed on the centre of the landing platform. Four 0.5 m long beams
pointing upwards are attached to each of the platform’s corners. The beams are
used for sensor placement. The following sensors have been mounted on the
ugv:
Figure 2.4: ugv platform with four uwb anchors and an AprilTag fiducial
marker. The uwb anchors are highlighted in red.
2.2.2 Sensors
The four Decawave DWM1001 uwb modules are used as anchors in the uwb
sensor network. The uwb modules are attached to the landing platform corners,
two in level with the platform and two on top of the beams, see Figure 2.4.
The Xsens imu provides acceleration and air pressure measurements, as well
as estimates of the ugv orientation.
Figure 3.1: The placement of the uwb sensors in the {UGV} frame.
Figure 3.2: The three main coordinate frames. {W} denotes the world frame,
{UAV} denotes the body-fixed coordinate frame of the uav and {UGV} denotes
the body-fixed coordinate frame of the ugv.
Figure 3.3: The camera and imu placement and orientation in the {UAV}
frame. The view is from underneath.
3.2 Rotation Representations
$$ q = \begin{bmatrix} n \sin(\alpha/2) \\ \cos(\alpha/2) \end{bmatrix}. \tag{3.4} $$
The conversion from Euler angles to a unit quaternion vector is
x = sin(φ/2) cos(θ/2) cos(ψ/2) − cos(φ/2) sin(θ/2) sin(ψ/2) (3.5a)
y = cos(φ/2) sin(θ/2) cos(ψ/2) + sin(φ/2) cos(θ/2) sin(ψ/2) (3.5b)
z = cos(φ/2) cos(θ/2) sin(ψ/2) − sin(φ/2) sin(θ/2) cos(ψ/2) (3.5c)
w = cos(φ/2) cos(θ/2) cos(ψ/2) + sin(φ/2) sin(θ/2) sin(ψ/2) (3.5d)
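As a small illustration, (3.5a)–(3.5d) can be implemented directly. The sketch below is not part of the thesis software; it returns the quaternion as [x, y, z, w]:

```python
import numpy as np

def euler_to_quaternion(phi, theta, psi):
    """Convert roll, pitch and yaw (rad) to a unit quaternion [x, y, z, w]
    according to (3.5a)-(3.5d)."""
    c_phi, s_phi = np.cos(phi / 2), np.sin(phi / 2)
    c_th, s_th = np.cos(theta / 2), np.sin(theta / 2)
    c_psi, s_psi = np.cos(psi / 2), np.sin(psi / 2)
    x = s_phi * c_th * c_psi - c_phi * s_th * s_psi
    y = c_phi * s_th * c_psi + s_phi * c_th * s_psi
    z = c_phi * c_th * s_psi - s_phi * s_th * c_psi
    w = c_phi * c_th * c_psi + s_phi * s_th * s_psi
    return np.array([x, y, z, w])
```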
modelled more thoroughly, but in this thesis they are simplified to the orientation
of the uav as well as a total thrust vector along the z-axis of the body-frame. The
dynamical model has the purpose of describing the resulting dynamics from the
orientation and thrust vector.
Let (x, y, z) be the position of the uav in the {W}-frame, (φ, θ, ψ) the roll,
pitch and yaw angle of the uav with respect to the {W}-frame. Furthermore, T
denotes the total thrust generated by the rotors, i.e. the magnitude of the force
vector propelling the uav along the direction of the z-axis in the {UAV}-frame. A
second force vector affecting the uav is from the force of gravity. Mahony et al.
[19] present a model of the resulting dynamics from the two force vectors as the
combined accelerations expressed in the {W}-frame. The model is
$$ m \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = m \begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix} + R^{W}_{UAV} \begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix}, \tag{3.6} $$
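The resulting acceleration in (3.6) can be evaluated as in the following sketch. The ZYX (yaw–pitch–roll) Euler sequence for $R^{W}_{UAV}$ is an assumption, since the exact rotation convention is not reproduced here:

```python
import numpy as np

def rotation_w_from_uav(phi, theta, psi):
    """Rotation matrix from the {UAV} frame to the {W} frame, assuming a
    ZYX (yaw-pitch-roll) Euler sequence."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    return Rz @ Ry @ Rx

def translational_acceleration(phi, theta, psi, thrust, mass, g=9.80665):
    """Acceleration of the uav in the {W} frame according to (3.6)."""
    gravity = np.array([0.0, 0.0, -g])
    body_thrust = np.array([0.0, 0.0, thrust])
    return gravity + rotation_w_from_uav(phi, theta, psi) @ body_thrust / mass
```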
Using the first order Taylor expansion, µ and Σ are estimated with the following
recursive scheme
$$ G_t = \left.\frac{dg(x, u)}{dx}\right|_{x=\hat{\mu}_t,\ u=u_t} \tag{3.12a} $$
$$ H_t = \left.\frac{dh(x)}{dx}\right|_{x=\hat{\mu}_t}. \tag{3.12b} $$
The recursions start from the initial guess of µ̂0 and Σ̂0 . The equations (3.11a)
and (3.11b) are called the prediction step. These equations propagate the system
dynamics through time using the dynamical model. This step predicts the cur-
rent state of the system (µ̂t , Σ̂t ). During the prediction step, the uncertainty in
the estimate grows, i.e. the covariance matrix gets larger.
The second step incorporates the measurements in the estimate and is called
the measurement step. In this step the prediction is corrected by the measure-
ment and the confidence in the estimate increases. The measurement step con-
sists of (3.11c)-(3.11f) and returns the estimates (µ̃t , Σ̃t ). The final step in (3.11f)
is expressed in the Joseph form. The Joseph form is equivalent to the Σ̃t calcula-
tion of the standard ekf, however, it is stated in a numerically stable expression
[22].
The nis is χ2 -distributed, where the degrees of freedom are equal to the number
of measurements in the measurement vector zt . A confidence level of the χ2 -
distribution is used to classify the measurements as either a valid sample or as an
outlier.
$$ r_i = \|x - a_i\|_2 + \epsilon_i, \tag{3.16} $$
where
$$ A = \begin{bmatrix} -2a_1^T & 1 \\ \vdots & \vdots \\ -2a_m^T & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} r_1^2 - \|a_1\|_2^2 \\ \vdots \\ r_m^2 - \|a_m\|_2^2 \end{bmatrix}. \tag{3.18} $$
The solution is then given by the least-squares solution of the resulting linear system.
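A minimal numerical sketch of that least-squares solution is given below; the first three elements of the solution vector are the estimated position (the closed-form expression itself is not reproduced above, so the standard normal-equations solution is assumed):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from anchor positions (m x 3) and
    measured ranges (m,), based on the linear system in (3.18)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    A = np.hstack([-2.0 * anchors, np.ones((anchors.shape[0], 1))])
    b = ranges**2 - np.sum(anchors**2, axis=1)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)  # theta = [x, y, z, ||x||^2]
    return theta[:3]
```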
Figure 3.4: Pinhole model illustration where f is the focal length and Z is the
distance to the camera.
The goal of the camera model is to map global coordinates [X, Y , Z]T to pixel
coordinates [u, v]T . The first step in the model is to transform the global coordi-
nates to the camera frame using the extrinsic parameters as follows
$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t, \tag{3.20} $$
where $[x, y, z]^T$ are the coordinates in the camera frame. The matrix R and the
vector t describe a rotation and a translation respectively, which together encapsulate
the extrinsic parameters. The next step is to project $[x, y, z]^T$ to a normalised
image plane using a perspective transformation as follows
The next step in the model is to account for radial and tangential distortions with
the plumb bob model [27]
$$ x'' = \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) x' + 2p_1 x' y' + p_2 \left(r^2 + 2x'^2\right) \tag{3.22a} $$
$$ y'' = \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) y' + p_1 \left(r^2 + 2y'^2\right) + 2p_2 x' y' \tag{3.22b} $$
$$ r = \sqrt{x'^2 + y'^2}, \tag{3.22c} $$
where k1 ,k2 and k3 are radial coefficients and p1 and p2 are tangential distortion
coefficients. The pixel coordinates u and v are finally obtained as
where fx and fy describe the focal lengths in pixel coordinates. The constants cx
and cy describe the centre of the image in pixel coordinates. To summarise, the
intrinsic parameters are: k1 , k2 , k3 , p1 , p2 , fx , fy , cx and cy .
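Putting (3.20)–(3.23) together, the full mapping from a global point to pixel coordinates can be sketched as below. The perspective projection x' = x/z, y' = y/z and the pixel mapping u = fx·x'' + cx, v = fy·y'' + cy are the standard forms assumed here, since (3.21) and (3.23) are not reproduced above:

```python
import numpy as np

def project_point(X_world, R, t, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Map a 3D point in global coordinates to pixel coordinates using the
    pinhole model with plumb bob distortion, following (3.20)-(3.23)."""
    # (3.20): global coordinates -> camera frame
    x, y, z = R @ np.asarray(X_world, dtype=float) + t
    # Perspective projection onto the normalised image plane (assumed x' = x/z, y' = y/z)
    xp, yp = x / z, y / z
    # (3.22): radial and tangential distortion
    r2 = xp**2 + yp**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xpp = radial * xp + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)
    ypp = radial * yp + p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp
    # Pixel mapping (assumed u = fx * x'' + cx, v = fy * y'' + cy)
    return fx * xpp + cx, fy * ypp + cy
```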
$$ u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt} \tag{3.24} $$
collide with the target if the line of sight (los) vector between them remains
constant and the distance is decreasing. This is achieved by generating an
acceleration target a⊥ for the pursuer, which is perpendicular to the los-vector,
such that when the target and the pursuer are moving, the los-vector remains
the same.
Let the position and velocity of the pursuer be pP and vP . Additionally, let the
position and velocity of the target be pT and vT . Then, let the relative position
and velocity between the vehicles be prel = pT − pP and vrel = vT − vP . The
acceleration a⊥ is then computed as follows
$$ a_\perp = -\lambda |v_{rel}|\, \frac{p_{rel}}{|p_{rel}|} \times \Omega, \qquad \Omega = \frac{p_{rel} \times v_{rel}}{p_{rel} \cdot p_{rel}}, \tag{3.25} $$
where λ is a positive gain parameter controlling the rate of rotation. See Figure
3.6 for an illustration.
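A minimal sketch of (3.25) is given below; the vehicle positions, velocities and the gain λ are the only inputs:

```python
import numpy as np

def pn_acceleration(p_pursuer, v_pursuer, p_target, v_target, lam):
    """Proportional navigation acceleration command according to (3.25)."""
    p_rel = np.asarray(p_target, float) - np.asarray(p_pursuer, float)
    v_rel = np.asarray(v_target, float) - np.asarray(v_pursuer, float)
    omega = np.cross(p_rel, v_rel) / np.dot(p_rel, p_rel)   # rotation of the los-vector
    los_direction = p_rel / np.linalg.norm(p_rel)
    # Acceleration perpendicular to the los-vector
    return -lam * np.linalg.norm(v_rel) * np.cross(los_direction, omega)
```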
where T is the air temperature, zrel is the difference in altitude between point A
and point B, while PA and PB are the pressures at A and B respectively. Furthermore,
L is the temperature lapse rate7, g is the gravitational acceleration and R is the
specific gas constant for air. The equation is valid up to an absolute altitude of 11 km
and can be inverted such that pressure is obtained from an altitude. The values
of L, R and g are given in [31], which presents the Standard Atmosphere Mean
Sea Level Conditions. From [31], the mean temperature at sea level T̄ as well as
the mean pressure at sea level P̄s are also obtained. The constants are presented in Table 3.1.
7 The rate at which the temperature in the atmosphere falls with altitude.
Table 3.1: Standard Atmosphere Mean Sea Level constants [31].

Constant    L          R             g          T̄      P̄s
Value       -0.0065    287.04        9.80665    288    101.325
Unit        K/m        m²/(K·s²)     m/s²       K      kPa
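Using the constants in Table 3.1, the relative altitude can be computed from two pressure readings as sketched below. Since the exact form of the equation is not reproduced above, the standard hypsometric formula is assumed:

```python
# Standard Atmosphere Mean Sea Level constants from Table 3.1
L_RATE = -0.0065   # temperature lapse rate [K/m]
R_AIR = 287.04     # gas constant for air [m^2/(K*s^2)]
G = 9.80665        # gravitational acceleration [m/s^2]
T_MEAN = 288.0     # mean sea-level temperature [K]

def relative_altitude(p_a, p_b, temperature=T_MEAN):
    """Altitude of point A above point B, computed from the pressures p_a and p_b
    (same unit) with the assumed hypsometric formula."""
    return (temperature / L_RATE) * ((p_a / p_b) ** (-L_RATE * R_AIR / G) - 1.0)
```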
4 Estimation System
The goal of the estimation system is to provide an estimate of the relative position
and velocity between the uav and the ugv. This chapter starts with an overview
of the estimation system in Section 4.1, the underlying filter structure used for
the estimation is described in Section 4.2 and a detailed description of the sensors
that are used by the estimation system is presented in Sections 4.4–4.7.
• Attitude measurements of the uav and ugv from the flight control unit as
well as the imu on the ugv, see Section 4.3.
• Distance measurements between the tag and the four anchors in uwb sen-
sor network, see Section 4.4.
• Relative position measurements from the camera system, see Section 4.5.
• Accelerometer measurements from the imu on the uav and ugv, see Sec-
tion 4.6.
• Barometer measurements from the imu on both vehicles, see Section 4.7.
The sensors are sampling data at different frequencies, the sampled data has
different accuracy and the sensors measure different quantities. In order to ac-
count for these differences, and combine the sensor measurements into estimates,
an ekf is used. The ekf takes the sensor data and outputs an estimate of the rel-
ative position and velocity of the uav with respect to the ugv in the {W}-frame.
An overview of the estimation system is presented in Figure 4.1.
The input of the model, u, is the relative acceleration between the uav and ugv.
The relative acceleration is modelled with additive white Gaussian noise (awgn)
as follows
$$ u = \begin{bmatrix} \ddot{x} & \ddot{y} & \ddot{z} \end{bmatrix}^T + \epsilon_u, \qquad \epsilon_u \sim \mathcal{N}(0, P_u). \tag{4.2} $$
The dynamical model used in the filter is based on the model used in [8]. How-
ever, instead of estimating the vehicles’ individual movements, the relative move-
ments between the vehicles are estimated. Additionally, this work assumes that
the acceleration enters as an input instead of a state. The dynamical model is
where $T_s$ is the period time of the filter, q is a tuning constant and ⊗ denotes the Kronecker
product. The model assumes that the derivative of the acceleration is a zero-mean
Gaussian white noise process with the same variance in all directions, i.e.
$[x^{(3)}\ y^{(3)}\ z^{(3)}]^T \sim \mathcal{N}(0, q \cdot I_{3\times3})$.
From (3.11a) and (3.11b), it follows that the prediction step consists of the following equations:
The prediction step is usually executed with a frequency equal to the frequency
of the sensor with the highest sampling rate. Therefore the frequency of the
prediction step is set at 50 Hz, since the update rate of the accelerometer sensor
is 50 Hz, see Section 4.6. The time period is therefore Ts = 1/50 s.
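Since the matrices of the dynamical model are not reproduced above, the sketch below assumes a standard discretised double-integrator model with the relative acceleration as input and a Kronecker-structured process noise, as suggested by the text; the exact expressions used in the thesis may differ:

```python
import numpy as np

TS = 1.0 / 50.0  # filter period [s]
I3 = np.eye(3)

# Assumed discretised double-integrator model with the relative acceleration as input
F = np.block([[I3, TS * I3], [np.zeros((3, 3)), I3]])
G = np.vstack([0.5 * TS**2 * I3, TS * I3])

def process_noise(q):
    """Assumed Kronecker-structured process noise with tuning constant q."""
    return q * np.kron(np.array([[TS**3 / 3, TS**2 / 2],
                                 [TS**2 / 2, TS]]), I3)

def predict(mu, sigma, u, q):
    """EKF prediction step: propagate the mean and covariance one filter period."""
    mu_pred = F @ mu + G @ u
    sigma_pred = F @ sigma @ F.T + process_noise(q)
    return mu_pred, sigma_pred
```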
With the sensor measurement model in (3.10), (3.11c)–(3.11e) give the following
measurement step,
where $H_t = \frac{dh(x)}{dx}\big|_{x=\mu_t}$ and $K_t$ is the Kalman gain. Equation (4.6c) is written in
Joseph form, see Section 3.4. The measurement step will be different for each
sensor type since they have different models and measure different states. The
sensor models used in the filter are described in the sections below.
As stated in Section 3.4.1, a sensor measurement might be an outlier and
should in that case be removed. In order to detect and remove invalid sensor
data, the nis test is used. The nis is calculated as described in (3.13) and is
approximately χ²-distributed. A 95% confidence level is used to classify each
measurement as an outlier or a valid measurement.
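A minimal sketch of this gating, assuming the standard definition of the nis as the innovation weighted by the inverse innovation covariance, is given below:

```python
import numpy as np
from scipy.stats import chi2

def nis_gate(innovation, S, confidence=0.95):
    """Normalised innovation squared test: return True if the measurement is
    accepted, False if it is classified as an outlier."""
    innovation = np.atleast_1d(innovation)
    S = np.atleast_2d(S)
    nis = float(innovation @ np.linalg.solve(S, innovation))
    return nis <= chi2.ppf(confidence, df=innovation.size)
```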
The initial guess of the z-position is taken as a mean from a series of altitude
estimates provided by the barometers, see Section 4.7. The mean altitude value
is denoted z0 . To calculate the initial guess in the x- and y-position (x0 , y0 ), two
series of uwb distance measurements are taken from each uwb anchor. From the
measurements, the position is trilaterated using the theory in Section 3.5, only
considering the horizontal position. The initial relative velocity is set to zero.
Hence, the initial guess µ0 is
$$ \mu_0 = \begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 0_{3\times1} \end{bmatrix}. \tag{4.7} $$
$$ x^W = {}^{W}R_b\, x^b, \tag{4.9} $$
where $x^b \in \mathbb{R}^3$ is a vector expressed in a body-fixed coordinate system and $x^W \in \mathbb{R}^3$ is the same vector expressed in the {W} frame.
Figure 4.2: Overview of the uwb sensor network. Each anchor-tag pair re-
turns the range between them as well as the position of the anchor in the
{UGV}-frame. The anchor positions are transformed to the {W}-frame and sent
with the ranges to the ekf.
4.4 Ultra-Wide Band Sensor Network
$$ p_i^W = p_{UGV}^W + \delta_i^W, \tag{4.11} $$
where $p_{UGV}^W$ is the position of the ugv in the {W}-frame. The distance measurement $d_i$ between one of the anchors and the tag (mounted on the uav) can then be expressed as follows
$$ d_i = \left\| p_{UAV}^W - p_i^W \right\|_2, \tag{4.12} $$
where $p_{UAV}^W$ is the position of the uav in the {W}-frame. Since $x_{1:3} = p_{UAV} - p_{UGV}$, the distance $d_i$ is
$$ d_i = \left\| x_{1:3} - \delta_i^W \right\|_2. \tag{4.13} $$
However, the distance measurement $z_{UWB}^i$ from anchor-tag pair i is not without error. In [32], Gou et al. assumed a linear model between the measurement $z_{UWB}^i$ and the true distance $d_i$, i.e.
$$ d_i = a_i z_{UWB}^i + b_i, \tag{4.14} $$
where $a_i$ is a scaling factor and $b_i$ is the bias. Additionally, assuming zero-mean awgn, the sensor model of anchor-tag pair i is
$$ z_{UWB}^i = h_{UWB}^i(x) + \epsilon = \frac{\left\| x_{1:3} - \delta_i^W \right\|_2 - b_i}{a_i} + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, R_{UWB}^i), \tag{4.15} $$
where $\epsilon$ is the measurement error and $R_{UWB}^i$ is the (1 × 1) error covariance matrix of anchor-tag pair i. The parameters of each model ($a_i$, $b_i$, $R_{UWB}^i$) were determined empirically, which is described in the following section. The Jacobian of $h_{UWB}^i(x)$ is
$$ H_{UWB}^i(x) = \frac{dh_{UWB}^i(x)}{dx} = \frac{1}{a_i} \begin{bmatrix} (x_{1:3} - \delta_i^W)^T & 0_{1\times3} \end{bmatrix} \Big/ \left\| x_{1:3} - \delta_i^W \right\|_2. \tag{4.16} $$
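The following sketch shows how one anchor-tag pair could enter the filter: the predicted range and Jacobian follow (4.15)–(4.16), and the update uses the Joseph-form covariance update from Section 4.2. It is an illustration under these assumptions, not the thesis implementation:

```python
import numpy as np

def uwb_h_and_jacobian(x, delta_w, a_i, b_i):
    """Predicted uwb range h^i_UWB(x) and Jacobian H^i_UWB(x) from (4.15)-(4.16).
    x is the six-dimensional state and delta_w the anchor offset in the {W}-frame."""
    diff = x[:3] - delta_w
    dist = np.linalg.norm(diff)
    h = (dist - b_i) / a_i
    H = np.hstack([diff / dist, np.zeros(3)]).reshape(1, 6) / a_i
    return h, H

def uwb_update(mu, sigma, z, delta_w, a_i, b_i, r_uwb):
    """Scalar EKF update with one uwb range, using the Joseph-form covariance update."""
    h, H = uwb_h_and_jacobian(mu, delta_w, a_i, b_i)
    S = float(H @ sigma @ H.T) + r_uwb    # innovation covariance
    K = sigma @ H.T / S                   # Kalman gain (6 x 1)
    mu_new = mu + (K * (z - h)).ravel()
    IKH = np.eye(6) - K @ H
    sigma_new = IKH @ sigma @ IKH.T + r_uwb * (K @ K.T)
    return mu_new, sigma_new
```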
Each test consisted of an anchor-tag pair (one uwb anchor and the uwb tag),
and a laser measuring tool, which can measure distances with sub-centimetre pre-
cision. However, with the test setup used, unevenness of the ground could cause
errors in the centimetre range. The test was conducted in open space outside to
reduce effects from multipath propagation. The tag and anchor were each placed
on top of cardboard boxes to reduce reflections from the ground, see Figure 4.3.
Figure 4.3: Cardboard box with uwb tag to the left. Cardboard box with
uwb anchor to the right.
The test setup can be seen in Figure 4.4. The ground truth d is calculated as
d = dl − doffset . (4.17)
where dl is the distance from the laser measuring tool and doffset is the distance
between the laser and uwb tag. To achieve an adequate sensor model, the setup
was repeated at 20 different distances, 1-30 m. At each distance the true distance
d was measured and 100 samples of the range measurement r were collected from
the uwb tag. The measurements were then repeated for all anchor-tag pairs. P1
was tested an additional time to examine if the model parameters vary between
tests.
As described in (4.15), the relation between the range measurement r and the
ground truth d is
$$ r = \frac{d - b}{a} + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, R_{UWB}). \tag{4.18} $$
The parameters (a, b) can be estimated with least squares regression. To estimate
Figure 4.4: Model parameter test setup. The uwb anchor-tag pair measures
the distance r between them. The laser measuring tool measures the refer-
ence distance to the cardboard box dl , which is used as an estimate of the
ground truth.
(a, b) from the measurements from a test, the following matrices are formed
$$ A_r = \begin{bmatrix} r_1^1 & 1 \\ \vdots & \vdots \\ r_{100}^1 & 1 \\ \vdots & \vdots \\ r_1^{20} & 1 \\ \vdots & \vdots \\ r_{100}^{20} & 1 \end{bmatrix}, \qquad x_{par} = \begin{bmatrix} a \\ b \end{bmatrix}, \qquad y_d = \begin{bmatrix} d^1 \\ \vdots \\ d^1 \\ \vdots \\ d^{20} \\ \vdots \\ d^{20} \end{bmatrix}, \tag{4.19} $$
where $r_j^k$ is the j:th sample from the range measurements at distance $d^k$. The
parameters are then calculated as
The parameters were estimated for each of the five tests, see Table 4.1. The
bias b can vary between tests, with estimated bias values between 1 and 10 cm. Fur-
thermore, examining the separate tests with anchor-tag pair P1, the results are not
consistent over time. We conclude that estimating the model of each individual
anchor-tag pair will not be a fruitful endeavour.
Table 4.1: Estimated parameters a and b from the data from each test.
anchor-tag pair a b
P1, test 1 1.0030 0.0820
P1, test 2 1.0043 0.0130
P2 1.0035 0.0348
P3 1.0033 0.0659
P4 1.0015 0.1003
Figure 4.5: The sample errors for all model parameter tests plotted against
distance.
where $R_{4\times4} = R_{UWB} I_{4\times4}$ is the covariance matrix of the measurement error e and
$R_{UWB}$ is the variance of a uwb range measurement determined in the previous
subsection. The model in (4.24) is nonlinear and is therefore linearised using the
first order term from the Taylor series expansion at a state $x_j$. The resulting model
is
$$ y = \begin{bmatrix} H_{UWB}^1(x_j) \\ H_{UWB}^2(x_j) \\ H_{UWB}^3(x_j) \\ H_{UWB}^4(x_j) \end{bmatrix} x + e. \tag{4.25} $$
When evaluating different anchor configurations, it is not sufficient to com-
pare the crlb at a single state xj . However, a number of states can provide a
statistically significant benchmark. Since the uwb measurements only give posi-
tional information, the velocities are irrelevant and set to zero. Also, since the
uwb sensor network will be used to estimate positions above ground (z ≥ 0),
negative z-positions are not studied. In order to achieve a statistically significant
32 4 Estimation System
Figure 4.7: The points and convex hulls corresponding to each anchor con-
figuration. For each configuration, the crlb-means over the semi-sphere radius are also shown.
From Figures 4.7b, 4.7d and 4.7f it is observed that $CRLB_y$ is essentially
identical to $CRLB_x$, i.e. the crlb-mean is identical in each horizontal direc-
tion. Another observation is that the crlb-means seem to grow quadratically
with distance. When studying Figure 4.7b it is clear that Conf 1 leads to essen-
tially no positional information in the z-direction at any radius. When comparing
Figures 4.7b and 4.7d, it can be seen that Conf 2 leads to slightly more positional
information of the horizontal positions. However, the positional information of
Conf 2 in the z-direction is worse when compared to Conf 3. The conclusion is to
choose Conf 3 as anchor configuration.
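A sketch of how the crlb-means can be evaluated for an anchor configuration is given below. Under the linearised Gaussian model in (4.25), the crlb of the position is the inverse of the Fisher information $H^T R^{-1} H$. The anchor coordinates and the range variance in the example are illustrative assumptions (anchors on the corners of the 1.5 × 1.5 m platform with two of them raised 0.5 m) and are not necessarily identical to Conf 3:

```python
import numpy as np

def crlb_position(point, anchors, r_uwb, a=1.0):
    """crlb of the x-, y- and z-position at a given point, from the linearised
    uwb model in (4.25): the inverse of the Fisher information H^T R^-1 H."""
    diffs = point - anchors
    dists = np.linalg.norm(diffs, axis=1, keepdims=True)
    H = diffs / dists / a                      # position part of the Jacobians
    fisher = H.T @ H / r_uwb
    return np.diag(np.linalg.inv(fisher))      # crlb of [x, y, z]

# Example: crlb-means over points randomly placed on a semi-sphere of radius 10 m
rng = np.random.default_rng(0)
v = rng.normal(size=(10000, 3))
v[:, 2] = np.abs(v[:, 2])                      # keep z >= 0
points = 10.0 * v / np.linalg.norm(v, axis=1, keepdims=True)
anchors = np.array([[0.75, 0.75, 0.0], [0.75, -0.75, 0.5],
                    [-0.75, -0.75, 0.0], [-0.75, 0.75, 0.5]])  # assumed layout
mean_crlb = np.mean([crlb_position(p, anchors, r_uwb=0.01) for p in points], axis=0)
```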
As stated previously, the crlb-means seem to grow quadratically with radius
r. To study Conf 3 further, we define the azimuth angle and inclination angle in
the spherical coordinate system as α ∈ [−π, π] and β ∈ [0, π], respectively. The
radius is fixed to r = 10 m, and for each point, α and β are calculated. From the
calculations as well as the crlb of each point, the relation between α and crlb
as well as β and crlb is shown in Figures 4.8 and 4.9.
When analysing Figure 4.8, we first notice the spread of points for each value
of α, caused by the varying β-angle. The opposite behaviour can be seen in
Figure 4.9. However, there is still a clear pattern. When α points in the x-
direction, more information about the x-position is gained, and the same holds for
the y-position. A similar behaviour can also be seen in Figure 4.9. When β points
more in the z-direction, the positional information about the z-position is better. Fur-
thermore, we conclude that α has no effect on the positional information about the
z-position.
Figure 4.8: crlb over azimuth angle α. Calculated from 10000 points ran-
domly placed on a semi-sphere with a radius r = 10 m. The three panels show
the crlb of the x-, y- and z-position, respectively.
Figure 4.9: crlb over inclination angle β. Calculated from 10000 points
randomly placed on a semi-sphere with a radius r = 10 m. The three panels show
the crlb of the x-, y- and z-position, respectively.
See Figure 4.10 for an overview. These four components are each described in the
following four sections.
Figure 4.10: Overview of the camera system: camera images are processed by
the marker detection algorithm, and the resulting relative position is converted
by the coordinate transform before being passed to the ekf.
4.5.1 Camera
As described in Section 2.1, a camera is mounted on the uav. The camera is
modelled as a pinhole camera, see Section 3.6. The parameters of the pinhole
model were determined through a calibration procedure.
The calibration procedure is explained in [33]. In essence, the procedure in-
volves holding a checkerboard with known dimensions in front of the camera.
The checkerboard is moved between different poses while the camera is taking
pictures of it. The checkerboard contains 9 × 7 squares; however, it is the 8 × 6 in-
terior vertices that are used by the calibration algorithm. When enough
images of the checkerboard have been taken, the calibration algorithm solves an
optimisation problem in order to obtain an estimate of the camera parameters.
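The calibration pipeline of [33] is not reproduced here, but a comparable procedure using OpenCV is sketched below. The square size and the image file location are assumptions; the distortion vector returned by OpenCV is ordered [k1, k2, p1, p2, k3]:

```python
import glob
import cv2
import numpy as np

PATTERN = (8, 6)        # interior corners of the 9 x 7 checkerboard
SQUARE_SIZE = 0.025     # assumed square side length in metres

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for fname in sorted(glob.glob("calibration/*.png")):   # assumed image location
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds fx, fy, cx and cy; dist holds the distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```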
system the fiducial marker is used to estimate the ugv’s position with respect to
the camera mounted on the uav. This estimation is possible since the size of the
fiducial marker is known. The marker is mounted on top of the ugv, see Section
2.2.
AprilTag can detect a variety of tags, known as tag families8 . Variations be-
tween families can come in size, appearance and other properties. A tag family
contains multiple tags, each identified by their individual ID (positive integer).
The tag family used in this project is called TagCustom48h12, which was designed
in [37]. A tag can usually be divided into two parts: one used for the identifica-
tion and one used for the position estimation, formally called the estimation area. The
tags from the TagCustom48h12-family also contain a space in the centre which is
not used in neither the identification nor the estimation process, see Figure 4.11.
This enables for the creation of a recursive tag, which herein is defined as:
A recursive tag is a tag with an inner tag placed in the centre. The inner tag
can also hold another tag placed in its centre, increasing the level of
recursion.
The advantage of recursive tags is that tags with different sizes can be com-
bined into a new tag with an extended range of detection compared to the indi-
vidual tags. By range of detection the authors refer to the interval between the
minimum and maximum distance at which a tag can be detected with a camera.
One of the governing quantities for the range of detection is the size of the tag,
since it determines at which distances it can be seen in the fov of the camera.
When a smaller tag is placed within a larger tag, the combined tag can be seen
between the maximum distance of the larger tag and the minimum distance of
the smaller tag, given that there is no gap between the tags’ range of detection.
The recursive tag on the ugv is a combination of two tags in the TagCus-
tom48h12 family. The tags have IDs 0 and 1, see Figure 4.12a and 4.12b. The
8 A description of the different kinds of AprilTag families can be seen at:
https://april.eecs.umich.edu/software/apriltag.html
(a) Outer tag with ID 0. (b) Inner tag with ID 1. (c) Recursive tag.
Figure 4.12: Description of the recursive tag consisting of two tags combined
into one. The green area corresponds to the part of the tag not used in the
detection.
dimensions of the tags as well as their estimation areas are presented in Table 4.2.
The tag with ID 1 is placed inside the tag with ID 0 as explained in Figure 4.12.
where pW is the estimated position from the AprilTag algorithm expressed in the
{W}-frame.
of detection. The small tag has a slightly wider area of detection, which is moti-
vated by the fact that parts of the larger tag disappear out of the fov before the
small tag. The range of detection for the small tag was experimentally confirmed
using the camera mounted on the uav. The tag could be detected at distances up
to 8 m.
Figure 4.14: Detection range of the small and the large tag. The blue and
red lines indicate the areas where the large and small tags can be detected,
respectively. The right figure is a zoomed-in version of the left figure.
First, the impact of the pitch angle of the camera is investigated. Figure 4.15
displays the bias and standard deviation when the pitch angle is varied for a typ-
ical position. The pitch angle mostly affects the accuracy of the z-estimate, and
the highest error occurs when the camera is parallel with the tag. The maximum
bias in the height estimate is approximately 10 cm for the large tag and 6 cm
when using the small tag. The same applies for the roll angle. In the subsequent
tests the camera is parallel with the tag to obtain a conservative estimate of the
error.
(a) Large tag, camera at position (0.5, 0, 5) m. (b) Small tag, camera at position (0.2, 0, 2) m.
Figure 4.15: Bias and standard deviation when varying the pitch angle of the
camera relative to the tag. The angles vary between −17.3° and 17.3°, which
are the maximum allowed attitude, see Section 5.3. For the largest negative
pitch angles the tags were not entirely inside the fov of the camera.
The next test investigates the estimation quality when the distance along the
z-axis varies, and the results can be seen in Figure 4.16 where it is apparent that
the height estimate is affected the most. Figure 4.17 shows the bias and the stan-
dard deviation for the position estimate when the camera’s x-position varies at
a constant height. The standard deviation and bias from the y-estimate is not
affected when the camera is moving in the x-direction. The bias and the standard
deviation in the x-direction grows as the relative x-distance increases. It has been
confirmed that the opposite applies when the y-position varies.
The highest altitude the uav flies at during the landing manoeuvre is 10 m.
At 10 m the bias in the z-direction for the large tag is around 20 cm and quickly
decreases as the altitude gets smaller. The bias in the horizontal plane is in the
order of a couple of cm throughout all the tests. Comparing these biases to the
precision of the camera sensor placement, which is also in the order of a couple of
cm, it is concluded that they are of similar size. Therefore, the bias is negligible.
Figure 4.16: Bias and standard deviation when varying the height of the
camera. The camera is parallel with the tag and the x and y position is 0.
Figure 4.17: Bias and standard deviation when varying the x-position of
the camera at a constant height. The camera is parallel with the tag, and the
y-position is 0.
Table 4.3: Linear regressors for the variance in the x-, y- and z-directions. The
variable r represents the horizontal offset.
Figure 4.18: Illustration of the variance surface when the x- and z-position
changes. The red stars indicate the prediction from the linear regression
model.
4.6 Accelerometer
The estimation system uses relative acceleration data as input in each time update
step. In order to obtain relative acceleration data, both the uav and the ugv
are equipped with an imu. The imu contains an accelerometer, which measures
acceleration in the vehicle’s local coordinate frame. The acceleration data is low-
pass filtered and down-sampled from 400 Hz to 50 Hz before being passed to the
ekf. An overview of the relative acceleration computation can be seen in Figure
4.19.
Figure 4.19: Overview of the acceleration data processing. The two imus
attached to the vehicles acquire acceleration measurements at 400 Hz. The
prefilter block represents the low-pass filtering and downsampling of the
measurements. After the prefilter block, the relative acceleration is com-
puted and passed to the ekf.
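The exact low-pass filter is not specified above; the sketch below uses a standard anti-aliasing decimation as a stand-in for the prefilter block and then forms the relative acceleration, assuming both signals have already been rotated to the {W}-frame:

```python
import numpy as np
from scipy.signal import decimate

def prefilter(acc_400hz, factor=8):
    """Low-pass filter and downsample 400 Hz accelerometer data to 50 Hz.
    scipy's decimate (anti-aliasing filter + downsampling) is used as a
    stand-in for the unspecified prefilter."""
    return decimate(acc_400hz, factor, axis=0)

def relative_acceleration(acc_uav_w, acc_ugv_w):
    """Relative acceleration input u to the ekf; both signals are assumed to
    already be expressed in the {W}-frame."""
    return np.asarray(acc_uav_w) - np.asarray(acc_ugv_w)
```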
4.7 Reference Barometer
Figure 4.20: Barometer data from the uav and the ugv is used to compute an
estimate of the relative altitude, which in turn is passed to the ekf.
where Pa is the pressure from the uav’s barometer, Pb is the pressure from the
ugv’s barometer and T is the air temperature. It is assumed that the converted
altitude is under the influence of awgn. The sensor model is
Figure 4.21 shows the data from the test. The pressure data is used to compute
a height estimate which is presented in Figure 4.22. Table 4.4 presents statistical
data from the relative height estimate. From the data presented in the table, Rbaro
is set to 0.2 m2 .
For simulation purposes it is also interesting to know the variance of the
barometer. The variance of the first 1000 data points from the pressure data
gives a variance of 11.56 and 13.89 for barometer 1 and 2 respectively.
Figure 4.21: Pressure data from the two barometers. Red data correspond to
the barometer on the ground and blue data correspond to the barometer that is
lifted after 100 seconds.
Figure 4.22: Relative height estimates computed from the pressure data in
Figure 4.21. The red lines indicate the mean height for each 100 s time inter-
val.
5 Control System
This chapter describes the subsystem that is responsible for computing appropri-
ate control outputs for the uav to solve the landing task. The subsystem is named
control system.
To summarise, the control system should drive the relative position between
9 When used in the GUIDED_NOGPS flight mode. ArduCopter, however, supports a variety of flight
modes, see https://ardupilot.org/copter/docs/flight-modes.html.
the vehicles to zero by controlling the attitude and climbing rate of the uav. The
altitude can be directly controlled through changing the climbing rate. The move-
ments in the horizontal plane are indirectly controlled via the attitude targets.
Without the autopilot the uav would lose upwards thrust when tilting in any di-
rection since the total thrust would be split in the horizontal plane as well. How-
ever, the autopilot handles this since it is regulating the vertical velocity. The
entire control system can therefore be divided into a vertical and a horizontal
controller.
The context of the control problem changes with the distance between the vehi-
cles, which motivates the use of different control laws at different distances. A
state machine has therefore been implemented where each state corresponds to
a unique control law. The overall strategy of the state machine is described as
follows:
• The uav is flying at constant altitude and moves towards the ugv at the
maximum allowed velocity. The altitude is high enough to account for in-
accuracies in the height estimation in order to prevent collision with the
ground. When the uav is approaching the ugv it starts to slow down in
order to prevent an overshoot.
• When the uav is in close proximity of the ugv it starts to match the ugv’s
movements and its horizontal position. This requires a faster response time.
• When the uav closely follows the movements of the ugv it starts descend-
ing while keeping the horizontal positions as close as possible.
The above explained actions are each captured in a unique state. The states
are called: APPROACH, FOLLOW and DESCEND. The idea is that the state machine
starts in the APPROACH state and transitions to the other states in the presented
order as the distance between the vehicles gets shorter. A transition occurs if a
condition is met where the governing transition quantity is the relative distance
between the vehicles in the horizontal plane. A state transition can take place
in both directions. To prevent the state machine from switching back and forth
between two states, a deadband is introduced such that a greater distance is re-
quired in order to transition backwards. An overview of the state machine and its
conditions can be seen in Figure 5.2. The following three subsections describe
and motivate each state.
Figure 5.2: The states and state transition conditions of the state machine. The
variable dhor represents the relative distance in the horizontal plane between
the vehicles, kF→A and kD→F represent the additional distance required to
transition backwards. The distances are given in meters and kF→A = 0.2 and
kD→F = 1.0.
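The transition logic in Figure 5.2 can be sketched as below. The APPROACH/FOLLOW threshold of 4 m is taken from Section 5.2.2, while the FOLLOW/DESCEND threshold is an assumed placeholder:

```python
APPROACH, FOLLOW, DESCEND = "APPROACH", "FOLLOW", "DESCEND"

D_FOLLOW = 4.0     # approach/follow threshold [m], stated in Section 5.2.2
D_DESCEND = 1.0    # follow/descend threshold [m], assumed placeholder
K_F_TO_A = 0.2     # extra distance required to fall back to APPROACH [m]
K_D_TO_F = 1.0     # extra distance required to fall back to FOLLOW [m]

def next_state(state, d_hor):
    """Advance the landing state machine given the horizontal distance d_hor,
    using a deadband to avoid switching back and forth between states."""
    if state == APPROACH and d_hor < D_FOLLOW:
        return FOLLOW
    if state == FOLLOW:
        if d_hor > D_FOLLOW + K_F_TO_A:
            return APPROACH
        if d_hor < D_DESCEND:
            return DESCEND
    if state == DESCEND and d_hor > D_DESCEND + K_D_TO_F:
        return FOLLOW
    return state
```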
5.2.1 Approach
The APPROACH state is the initial state of the state machine. In this state the uav
flies at a constant altitude of approximately 10 meters10 while moving towards
the ugv in the horizontal plane. The main goal is to close the distance between
the vehicles as fast and efficiently as possible. Since the distance to the target is
longest in this state, the estimation of the relative position difference, based on the
uwb measurements, exhibits the largest errors. Also, the distance implies that
the movements of the ugv do not affect the direction of the relative position
vector as much compared to the other states. These two facts entail that the
control strategy in this state should be to guide the uav in a coarse direction
towards the ugv, i.e. it should neglect fast movement variations while aiming
for a future point of collision.
As stated in Section 3.8 guidance laws are control laws designed to guide a
pursuer such that it reaches a target object. A common application for guidance
laws is therefore in missile systems. However, Gautam et al. [38] investigate the
possibility of using guidance laws for uav landing. They compare the performance
of three classical guidance laws: pure pursuit, line-of-sight and proportional nav-
igation. Gautam et al. concluded that the proportional navigation (pn) law is the
most efficient in terms of time and required acceleration. Borowczyk et al. [8]
test the pn strategy in a real world scenario with positive results. This motivates
the use of a pn-controller in the APPROACH state.
As described in Section 3.8 the pn-control law provides an acceleration which
is perpendicular to the los-vector between the vehicles. Hence, the control law
only steers the vehicle. The law assumes that the vehicle is propelled forward
from another source such as a constant acceleration or an initial velocity, which
is the case for a missile. Therefore, a complementary controller is required. How-
ever, compared to a missile the approaching uav poses additional constraints on
the relative velocity. The uav should slow down before reaching the target in
order to not overshoot. Hence, the complementary controller should allow the
uav to fly with the maximum allowed velocity, saturated attitude angles, until
it is in the proximity of the ugv. At that point, it should start to slow down to
prevent an overshoot. The implementation of a pn controller that accounts for
10 Chosen as a safety precaution in consultation with the staff at foi.
5.2.2 Follow
The FOLLOW state is used when the uav is closer than 4 meters from the ugv in
the horizontal plane. The goal of the state is to position the uav directly above the
ugv, i.e. a zero relative difference in the horizontal plane. In order to maintain
a matching horizontal position and to track the motions of the ugv, a fast response
time is important. In this state the desired altitude is lowered to 5 m.
In [39] it is concluded that a pn-controller generates oscillative control actions
at close distances and therefore becomes inefficient, which is solved by switching
to a pd-controller. Borowczyk et al. [8] build on this result in their approach.
They also state that a pd-controller is easier to tune in order to achieve fast re-
sponse time and accuracy.
This motivates the use of a pd-controller in the FOLLOW state. How-
ever, in this work the pd-controller is extended to a pid-controller to mitigate
static errors due to wind or the ugv moving too fast.
5.2.3 Descend
The DESCEND state is the final state in the state-machine. The underlying con-
troller is the same as in the FOLLOW state, i.e. a pid-controller. The difference is
that the desired altitude is set to zero. The uav will therefore start to descend
further towards the ugv.
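Taken together, the three states form a simple state machine. The sketch below illustrates one possible implementation of the transition logic; the 4 m threshold for entering FOLLOW is stated in Section 5.2.2, whereas the thresholds for entering DESCEND and for triggering a retake are illustrative assumptions, not the tuned values.

```python
from enum import Enum, auto
import math

class State(Enum):
    APPROACH = auto()
    FOLLOW = auto()
    DESCEND = auto()

# Only FOLLOW_DIST (4 m) is taken from the text; the other two are assumptions.
FOLLOW_DIST = 4.0    # switch APPROACH -> FOLLOW below this horizontal distance
DESCEND_DIST = 0.5   # assumed: switch FOLLOW -> DESCEND when well centred
RETAKE_DIST = 1.5    # assumed: switch DESCEND -> FOLLOW (a "retake") if drifting away

def next_state(state: State, x_rel: float, y_rel: float) -> State:
    """Return the next state given the horizontal relative position (uav - ugv)."""
    r = math.hypot(x_rel, y_rel)
    if state is State.APPROACH and r < FOLLOW_DIST:
        return State.FOLLOW
    if state is State.FOLLOW and r < DESCEND_DIST:
        return State.DESCEND
    if state is State.DESCEND and r > RETAKE_DIST:
        return State.FOLLOW  # retake
    return state
```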
5.3 Implementation
This section describes how the control laws in the control system are implemented. The implementation is based on the theory described in Sections 3.7–3.8. As mentioned in Section 5.1, the control signals allow the control system to be divided into a vertical and a horizontal controller. This is also viable since the error signal $e = \|p_d - p_{rel}\|_2$ is minimised when each of its components is minimised, which can be done independently. The control algorithms used in the control system require the relative position and velocity of the uav with respect to the ugv. The state vector x is therefore
$$ x = \begin{bmatrix} p_{rel} \\ \dot{p}_{rel} \end{bmatrix} \in \mathbb{R}^6, \qquad (5.4) $$
where ṗrel denotes the time derivative of prel . The overall structure of the control
system is presented in Figure 5.3, which is an extension of the Control System
block in Figure 5.1.
The vertical controller takes the vertical position and velocity (z, ż) as well
as the desired height hd as inputs and outputs a climbing rate cr with the goal of
reaching hd . The implementation of the vertical controller is described in Section
5.3.1.
Figure 5.3: Overall structure of the implementation of the uav control sys-
tem. Inputs are the state vector x and the desired height hd . The control
system outputs control commands in the form of attitude angles (φ, θ, ψ)
and a climbing rate cr .
The horizontal controller block has the goal of steering the uav towards the
ugv to attain a relative horizontal distance of zero. The inputs of the controller
are x, ẋ, y and ẏ. The controller then outputs the attitude target (φ, θ). As stated earlier, ψ is set to 0 throughout the entire landing task.¹¹ The horizontal con-
troller uses two different control structures: a proportional-integral-derivative
(pid) structure (FOLLOW and DESCEND) and a proportional navigation (pn) struc-
ture (APPROACH). It is the task of the state machine to decide which of the two
control structures to use.
The control signals that are passed to the autopilot software must be within certain boundaries. The attitude angles φ and θ must be within the interval $[-\frac{\pi}{2}, \frac{\pi}{2}]$ and the climbing rate $c_r \in [0, 1]$. However, further restrictions of the steering commands are used as safety precautions to make sure that the uav cannot accelerate too aggressively during flight. The restrictions are
$$ \phi, \theta \in [-\alpha_{max}, \alpha_{max}], \qquad c_r \in [c_{r,min}, 1], \qquad (5.5) $$
where $\alpha_{max} = 0.3$ rad and $c_{r,min} = 0.3$. For convenience, the saturation function $f_{sat}$ is defined as
$$ f_{sat}(x, x_{min}, x_{max}) := \begin{cases} x_{max} & \text{if } x \geq x_{max}, \\ x & \text{if } x_{min} < x < x_{max}, \\ x_{min} & \text{if } x \leq x_{min}. \end{cases} \qquad (5.6) $$
¹¹ The choice of ψ = 0 is for simplicity's sake and is not a restriction of the system. To fly with an arbitrary yaw angle ψ, the attitude vector $[\phi \; \theta]^T$ is simply rotated by ψ radians before being sent to ArduCopter.
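As a concrete illustration, a minimal sketch of the saturation function (5.6) and of the command limiting is given below; the grouping of the limits follows the reconstruction of (5.5) above and should be read as an assumption rather than the exact implementation.

```python
ALPHA_MAX = 0.3  # rad, limit on the roll and pitch commands
CR_MIN = 0.3     # lower limit on the climbing-rate command

def f_sat(x: float, x_min: float, x_max: float) -> float:
    """Saturation function f_sat in (5.6)."""
    return max(x_min, min(x, x_max))

def limit_commands(phi: float, theta: float, cr: float):
    """Restrict the attitude and climbing-rate commands before they are sent
    to the autopilot, cf. (5.5); the exact grouping of limits is assumed."""
    phi_sat = f_sat(phi, -ALPHA_MAX, ALPHA_MAX)
    theta_sat = f_sat(theta, -ALPHA_MAX, ALPHA_MAX)
    cr_sat = f_sat(cr, CR_MIN, 1.0)
    return phi_sat, theta_sat, cr_sat
```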
The error signal in the x-direction is chosen as $e_x = -x$. With only a proportional term in the controller, this error signal results in a desired negative acceleration when the relative x-distance is positive, i.e. the uav accelerates towards the ugv, and vice versa when the relative x-distance is negative. Thus, the choice of error signal $e_x = -x$ minimises $x^2$. The pid controllers return the horizontal accelerations $(a_x, a_y)$.
Since the pid controller is implemented in discrete time, the integral in (3.24) must be approximated. A common approximation is derived using the Euler method, and gives
$$ \int_0^t e_x(\tau)\, d\tau \approx T_s \sum_{j=0}^{k} e_{x,j}, \qquad (5.8) $$
where k is the sample index at time t and $T_s$ is the sampling period of the controller. However, when this approximation is coupled with control signal saturation it can lead to integrator windup [28]. To avoid this phenomenon, the error sum is saturated and multiplied by a decay factor $d_f < 1$. The saturation prevents the I-part from becoming too large and the decay factor makes sure that older errors do not have as much impact as newer ones. The error sum at sample k, $S_{x,k}$, is calculated as
$$ S_{x,k} = d_f \cdot f_{sat}\left(S_{x,k-1} + e_{x,k}, -S_{max}, S_{max}\right), \qquad (5.9) $$
where $S_{max}$ is the saturation limit and $S_{x,0} = 0$. The I-part at sample k, $I_{x,k}$, is calculated as
$$ I_{x,k} = K_i \cdot T_s \cdot S_{x,k}, \qquad (5.10) $$
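A minimal sketch of this discrete I-part with the decaying, saturated error sum in (5.9)–(5.10) is given below; the numerical gains and the controller rate in the example are placeholders, not the tuned values used in the thesis.

```python
class IntegralTerm:
    """Discrete-time I-part with a decaying, saturated error sum, cf. (5.9)-(5.10)."""

    def __init__(self, k_i: float, t_s: float, decay: float, s_max: float):
        self.k_i = k_i      # integral gain K_i
        self.t_s = t_s      # controller period T_s
        self.decay = decay  # decay factor d_f < 1
        self.s_max = s_max  # saturation limit S_max
        self.s = 0.0        # error sum, S_{x,0} = 0

    def update(self, error: float) -> float:
        """Update the error sum with the newest error and return the I-part."""
        s_unsat = self.s + error
        # Saturate the sum and apply the decay factor, as in (5.9).
        self.s = self.decay * max(-self.s_max, min(s_unsat, self.s_max))
        # I-part at the current sample, as in (5.10).
        return self.k_i * self.t_s * self.s


# Example: I-part for the x-direction, assuming a 20 Hz controller rate.
i_x = IntegralTerm(k_i=0.1, t_s=0.05, decay=0.99, s_max=10.0)
```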
$$ a_{\parallel} = K_p e_p + K_d e_v, \qquad (5.17) $$
where $K_p$ and $K_d$ are positive tuneable parameters. The parameters are chosen such that the proportional term dominates when the uav is far away from the ugv, to make sure that the uav approaches the ugv with the maximum allowed velocity. As the uav approaches the ugv, the proportional error shrinks, the relative velocity term starts to dominate, and the uav slows down to prevent an overshoot.
In order to compute $a_\perp$, (3.25) is used as follows
$$ a'_\perp = \lambda |v| \, \frac{p}{|p|} \times \Omega, \qquad \Omega = \frac{p \times v}{p \cdot p}, \qquad (5.18) $$
where $p = [x_{1:2}^T \;\; 0]^T$ and $v = [x_{4:5}^T \;\; 0]^T$. The third component is set to zero since an acceleration in the horizontal plane is desired. The sign of the equation has changed since $x_{1:2}$ = {Pursuer position} − {Target position} and $x_{4:5}$ = {Pursuer velocity} − {Target velocity}, which is opposite from the theory. The requested perpendicular acceleration $a_\perp$ is the first two components of $a'_\perp$.
From experiments it has been noted that the magnitude of $a_\parallel$ is much larger than that of $a_\perp$. Changing the value of the $K_p$-parameter does not solve this, since $a_\parallel$ is proportional to the relative distance, which varies considerably during the APPROACH phase. This becomes an issue when the accelerations are converted to attitude angles according to (5.14). The saturation performed in (5.14) takes place in the x- and y-directions. If $a_\parallel$ and $a_\perp$ are not aligned with those axes, the much larger $a_\parallel$-vector would dominate and surpass the saturation limit, such that $a_\perp$ has no effect. To solve this, the saturation is instead performed on the individual accelerations before they are combined. The saturation is implemented as follows
$$ a_{\parallel,sat} = f_{sat}\left(|a_\parallel|, -\alpha_{max}, \alpha_{max}\right) \cdot \frac{a_\parallel}{|a_\parallel|}, \qquad (5.19a) $$
$$ a_{\perp,sat} = f_{sat}\left(|a_\perp|, -\alpha_{max}, \alpha_{max}\right) \cdot \frac{a_\perp}{|a_\perp|}, \qquad (5.19b) $$
which guarantees that each element of $a_{xy}$ is within the interval $[-\alpha_{max}, \alpha_{max}]$.
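To summarise the horizontal controller in the APPROACH state, the sketch below combines (5.17)–(5.19); the gains λ, Kp and Kd are placeholders, the error signals are assumed to be the negated relative states in line with ex = −x above, and the final conversion to attitude angles via (5.13)–(5.14) is omitted.

```python
import numpy as np

ALPHA_MAX = 0.3  # rad, saturation limit used in (5.19)

def saturate_to_norm(a: np.ndarray, limit: float) -> np.ndarray:
    """Scale a vector so that its norm does not exceed `limit`, cf. (5.19)."""
    norm = np.linalg.norm(a)
    if norm < 1e-9:
        return np.zeros_like(a)
    return min(norm, limit) * a / norm

def approach_acceleration(x: np.ndarray, lam: float, k_p: float, k_d: float) -> np.ndarray:
    """Horizontal acceleration request in the APPROACH state.

    x is the state vector in (5.4): relative position stacked with relative
    velocity (uav minus ugv). lam, k_p and k_d are placeholder gains.
    """
    p = np.append(x[0:2], 0.0)  # relative position, third component zeroed
    v = np.append(x[3:5], 0.0)  # relative velocity, third component zeroed

    # Parallel part (5.17), assuming e_p = -x_{1:2} and e_v = -x_{4:5}.
    a_par = k_p * (-x[0:2]) + k_d * (-x[3:5])

    # Perpendicular (proportional navigation) part (5.18).
    omega = np.cross(p, v) / np.dot(p, p)
    a_perp = (lam * np.linalg.norm(v) * np.cross(p / np.linalg.norm(p), omega))[0:2]

    # Saturate each contribution individually, as in (5.19), and combine.
    return saturate_to_norm(a_par, ALPHA_MAX) + saturate_to_norm(a_perp, ALPHA_MAX)
```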
Finally, the attitude angles can be computed using (5.13).

5.4 Control System Evaluations

The control system is first evaluated in simulation using true state information. Each simulation run proceeds as follows:
• When the flight controller is initialised, which takes about 10 s, the uav
starts to ascend and the ekf is initialised. Simultaneously, the ugv starts
moving in the horizontal plane with a velocity of 4 m/s according to the
pattern described in Appendix C.3.
• When the uav reaches an altitude of 9 m it starts using the outputs of the
control system in order to approach and land on top of the ugv.
During the entire simulation the uav is under the influence of an external wind
force, see Appendix C.1.
The simulation is repeated 100 times. Figure 5.7 shows the ugv’s movements
during the 100 runs. The shape of the trajectories is determined by the ugv’s
starting position, its heading at the start and how the ugv is manoeuvring during
the simulation. Additionally, the length of the trajectory is determined by the
time it takes for the uav to land, which is influenced by the ugv’s movements
and the direction and amplitude of the wind. Figure 5.8 illustrates the uav’s
flight path together with the movements of the ugv from the previous figure.
Figure 5.7: Movement trajectories for the ugv during 100 simulation runs. The red crosses indicate the starting positions of the ugv, the black lines show the trajectories and the blue stars represent the positions of the ugv when the uav has landed on top of it.
Figure 5.8: The ugv’s and the uav’s movement paths during the 100 runs.
The uav’s flight paths are represented with the purple lines.
The uav lands on top of the ugv in each simulation. The landing positions of the uav on the ugv platform can be seen in Figure 5.9, together with 95 % confidence ellipses. Additionally, Figure 5.9 presents 100 landing positions when the Ki constant is set to zero for both horizontal pid-controllers, i.e. no I-part during the landing phase. Without an I-part the landing positions are shifted backwards, which makes sense since the ugv is moving in the opposite direction. Additionally, the landing positions are more spread out when no I-part is used. In this idealised case, the landing accuracy is high and 95 % of the landings occur less than 20 cm from the centre of the landing pad.
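The 95 % confidence ellipses shown in Figure 5.9 can be obtained from the sample mean and covariance of the landing positions; a sketch of that computation, assuming the positions are approximately Gaussian, is given below.

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(points: np.ndarray, level: float = 0.95):
    """Centre, semi-axis lengths and orientation of a confidence ellipse.

    points is an (N, 2) array of landing positions; a Gaussian distribution
    of the positions is assumed.
    """
    mean = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    scale = chi2.ppf(level, df=2)                       # about 5.99 for 95 %
    semi_axes = np.sqrt(scale * eigvals)
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # major-axis orientation
    return mean, semi_axes, angle
```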
Figure 5.9: uav landing positions on the ugv, with and without an I-part in the horizontal pid-controller. The black line indicates the border of the landing platform on the ugv, the dashed black line illustrates the region where the uav can land safely in order to have all four legs on the platform, and the black arrow shows the direction the ugv is moving in. The blue circles show the landing positions when an I-part is used, and the red crosses show the landing positions when no I-part is used. Two 95 % confidence ellipses are presented alongside the landing positions.
The time distribution for the simulations is studied, in particular the elapsed time for all of the states, for the APPROACH state, and for the FOLLOW and DESCEND states. The median, min and max times of each scenario are presented in Table 5.1, which is based on the distributions presented in Appendix D.1. The variations in elapsed time during an entire mission seem to be caused by variations in the APPROACH phase. Further investigation of these variations shows that the main cause is the environmental circumstances of a simulation, i.e. the path the ugv traverses as well as the direction of the wind.
The fluctuations in elapsed time in the FOLLOW and DESCEND states are significantly smaller than in the APPROACH state. The extremes of these states were studied further, and it was confirmed that the additional time was caused by so-called retakes. If the uav moves too far away from the centre of the ugv during the DESCEND state, a transition back to the FOLLOW state, or a retake, is triggered. Retakes were observed in 12 of the 100 simulations. The retakes are not caused by estimation errors, since the control system uses true position and velocity data. Factors causing a retake are instead sudden changes in the ugv's movements or shifts in the wind amplitude.
Table 5.1: Time distribution from 100 simulations. Median, min and max
of the elapsed time in all of the states, the APPROACH state as well as the
FOLLOW and DESCEND state.
In order to further illustrate how the control system operates, data from one
of the 100 simulations is presented. Figure 5.10 shows the uav’s and the ugv’s
horizontal movements and Figure 5.11 shows the vehicles’ respective vertical po-
sitions during one simulation. In Figure 5.10 it can be seen that the uav is drift-
ing in the wind direction while it is ascending. When the uav reaches an altitude
of 9 m it starts using the outputs from the control system and therefore starts
moving towards the ugv. When the uav transitions from the APPROACH state
to the FOLLOW state the desired altitude changes from 10 m to 5 m, which Fig-
ure 5.11 shows. The uav transitions to the DESCEND state before it reaches an
altitude of 5 m and will therefore continue to descend until it has landed on the
ugv.
To get an understanding of how well the uav is able to track the ugv's velocity in the horizontal plane, the velocities in the x-direction are presented in Figure 5.12. It can be seen that the vehicles' velocities differ during the APPROACH state, but as the uav transitions to the FOLLOW and DESCEND states it tries to match the ugv's velocity. The uav's velocity exhibits a slightly oscillative behaviour during the FOLLOW and DESCEND states. An explanation for this is that the current implementation of the pid-controller only reacts to the current states and does not predict future state values.
Figure 5.10: The uav’s and the ugv’s horizontal movements during one sim-
ulation. The blue arrow shows the direction of the wind and the black cross
the position where the uav reaches an altitude of 9 m.
Figure 5.13 shows the attitude angles, roll and pitch, from the control system
during the simulation. When the control system transitions from the APPROACH
to the FOLLOW state, a steep change in control signals is observed. These changes
are caused by the switch between controllers in the state transition. This be-
haviour has been reduced in the control system tuning but is evidently still present.
Furthermore, the control signals show an oscillative behaviour in the DESCEND state, see Figure 5.13. The cause of the oscillations is that the uav has the goal of matching its position and velocity with the ugv's. With the ugv varying its direction regularly, as well as changes in the wind amplitude, the control signal has difficulty reaching a steady state.
Figure 5.11: The uav’s and the ugv’s vertical position during one simula-
tion. The leftmost dotted line indicates when the uav reaches an altitude of
9 m. The other two dotted lines indicate transition in the state machine.
Figure 5.12: The uav and ugv velocity in the x-direction during one simu-
lation.
Figure 5.13: Attitude output (roll and pitch angles) from the control system
during one simulation.
6 Landing System Evaluation
This chapter presents the results from the evaluation of the landing system. In
Section 6.1 the results from simulated landings using both the estimation and
the control system are presented. An experimental validation of the estimation
system is presented in Section 6.2.
6.1 Landing System Simulation

Figure 6.2: Landing positions of the uav with respect to the {UGV}-frame. The positions of 100 runs as well as a 95 % confidence ellipse of the landing positions are shown. The black dashed line represents the footprint margin of the uav, i.e. the area in which the uav can land safely.
The time distribution of the 100 runs can be seen in Table 6.1. The table is
generated from the data presented in Appendix D.1. Compared to Table 5.1 it
can be seen that the median time is slightly larger when using estimated data,
which is expected. The median time in the APPROACH state is similar between
the runs. However, the max time when using true data is about 10 s larger com-
pared to when using estimated data. This indicates that the extreme cases in the APPROACH state are mostly determined by the environmental circumstances rather than by the estimation quality. By contrast, the extreme cases in the FOLLOW and DESCEND states suggest that the governing factor is the estimation quality. The median time is not substantially increased; however, the max time is approximately 50 % larger when using estimated data. The uav exhibits 21 retakes when using estimated data, compared to the 12 retakes with true data. The additional retakes, together with the larger max time, indicate a more uncertain behaviour during the FOLLOW and DESCEND states when estimated data is used.
Table 6.1: Time distribution from 100 simulations. Median, min and max
of the elapsed time in all of the states, the APPROACH state as well as the
FOLLOW and DESCEND state.
The performance of the estimation system is measured using the root mean square error (rmse) of the estimated states. The estimated states are {x, y, z, vx, vy, vz}. Since the directions of the x- and y-axes are determined by the underlying reference system, which is decoupled from the vehicles' orientation, a horizontal distance and velocity are used instead in the evaluations. The horizontal distance r is defined as $r = \sqrt{x^2 + y^2}$ and the horizontal velocity $v_r$ is defined as $v_r = \sqrt{v_x^2 + v_y^2}$.
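As an illustration, the horizontal error measure could be computed as in the sketch below, under the reading that the rmse is taken over the estimated and true horizontal distances; the array layout is an assumption.

```python
import numpy as np

def horizontal_rmse(est_xy: np.ndarray, true_xy: np.ndarray) -> float:
    """RMSE of the horizontal distance r = sqrt(x^2 + y^2).

    est_xy and true_xy are (N, 2) arrays of estimated and true relative
    positions; the same function applied to (v_x, v_y) gives the
    horizontal-velocity RMSE.
    """
    r_est = np.linalg.norm(est_xy, axis=1)
    r_true = np.linalg.norm(true_xy, axis=1)
    return float(np.sqrt(np.mean((r_est - r_true) ** 2)))
```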
During a simulation, the relative distance between the vehicles varies. In the
APPROACH state the relative horizontal distance ranges from 4 to 100 m while
in the FOLLOW state and the DESCEND state the horizontal distance is below 4
m. The relative distance affects the estimation quality and the rmse is therefore
calculated separately for the APPROACH state and landing phase.
Table 6.2 and Table 6.3 present the mean, min and max rmse over the 100 runs for the position and velocity estimates, respectively. Additionally, the tables present data from simulations where the camera is excluded. The distributions of the rmse in simulation are presented further in Appendix D.2.
First, it can be seen that without the camera, the rmse for the horizontal and vertical position estimates is slightly lower in the APPROACH state but higher in the FOLLOW and DESCEND states compared to when the camera is used. In the APPROACH state the uav flies at the maximum allowed altitude, 10 m, and in the FOLLOW and DESCEND states the uav immediately starts to descend to an altitude of 5 m and 0 m, respectively. In Section 4.5.5 it is stated that the bias in the camera system increases with the altitude and that the bias is not accounted for in the estimate. This suggests that the bias in the camera system's estimate has the most impact in the APPROACH state and decreases in the FOLLOW and DESCEND states, which the results reflect. Furthermore, the camera system only provides position estimates at the very end of the APPROACH phase, when the vehicles are close to each other. The velocity estimate is not affected by bias in the position estimate, which can also be seen in Table 6.3.
Secondly, the horizontal position and velocity have a much larger mean rmse than the vertical quantities in the APPROACH state. The explanation is that the system mainly uses measurements from the uwb sensor network when estimating the horizontal quantities, while the reference barometer is used for the vertical estimates. In Section 4.4.3 it is concluded that the uwb sensor network exhibits a larger position error at longer distances. Comparing the results from the APPROACH state with the results from the FOLLOW and DESCEND states, it can be seen that the estimation errors are significantly lower for the latter. The accuracy of the uwb system improves during the FOLLOW and DESCEND phases due to a more favourable geometry. Furthermore, the camera-based positioning system can also provide measurements in these phases.
Table 6.2: Mean, min and max rmse for horizontal and vertical position
during 100 simulations. The rmse is divided into the APPROACH state and
the FOLLOW and DESCEND states. Additionally, rmse without the camera
system is presented.
Table 6.3: Mean, min and max rmse for horizontal and vertical velocity
during 100 simulations. The rmse is divided into the APPROACH state and
the FOLLOW and DESCEND states. Additionally, rmse without the camera
system is presented.
To illustrate the impact of the camera system during the landing phase, the
landing positions of the uav without a camera are presented in Figure 6.3. The
landing positions are more spread out compared to Figure 6.2. Also, a few of the
landing positions are outside the safety margins. This does not necessarily mean
that one of the uav’s legs is outside the platform, but there is a risk. Additionally,
the uav performs retakes in 65 of the 100 simulations. This indicates that the
landing system is less robust during landing when omitting measurements from
the camera system.
Figure 6.3: Landing positions of the uav with respect to the {UGV}-frame without the camera system. The positions of 100 runs as well as a 95 % confidence ellipse of the landing positions are shown. The black dashed line represents the footprint margin of the uav, i.e. the area in which the uav can land safely.
6.2 Estimation System Validation

Figure 6.4: The uav and the ugv during the Visionen test.
The recorded data from the manoeuvres, as well as the artificial sensor mea-
surements from the ugv, are post-processed in the estimation system. The post-
processing is performed using two sensor setups: all sensors used in the esti-
mation system and all sensors except the camera system used in the estimation
system. The second sensor setup is motivated by the goal of finding out if a land-
ing can be achieved covertly and independently of the time of day. Such a landing could not rely on the camera system in its current state, since it requires visibility of the tag, which is not possible at night. Solutions such as lighting up the landing
pad would not meet the covert landing criteria. From both setups, the positions
and velocities in each manoeuvre are estimated. The uav trajectory (estimated
as well as ground truth) from the second flight manoeuvre is presented in Figure
6.5 to illustrate the performance of the estimation.
To further investigate the estimation accuracy for both sensor setups, the rmse, the maximum absolute error and the 95th percentile of the absolute error for the position and velocity estimates are determined, see Table 6.4 and Table 6.5. The results show that the rmse for the horizontal position estimate almost doubles when the camera system is excluded. The results are also compared with the results in Table 6.2 and Table 6.3 for the FOLLOW and DESCEND states.

Figure 6.5: uav trajectory (ground truth and estimation) from the second flight manoeuvre in the Visionen test. The landing pad of the ugv with four beams pointing upwards is shown as well.

The individual rmse from the experimental tests are less than or equal to the corresponding
maximum rmse in simulation. This suggests that there exist simulations where
the estimation quality is similar to the experimental tests.
The maximum absolute errors in Table 6.4 and 6.5 are around 2 to 3 times larger than the rmse. The greatest discrepancy between the rmse and the maximum is seen in landing manoeuvres 1 and 3. In these manoeuvres, the large error was caused by the camera losing sight of the tag. In the case of flight manoeuvre 2, the camera has sight of the tag during the entire manoeuvre, see Figure 6.6. This explains the significantly lower maximum error when the camera is used. Comparing the estimation errors without the camera system, the maximum absolute error is similar during all three manoeuvres. The maximum error without the camera system is also similar to the maximum error during flights 1 and 3, which confirms that the large error is caused by the camera system losing track of the tag.
The estimation without the camera system was investigated further. It was
concluded that the large errors can occur during the entire manoeuvre. Hence,
with only a uwb sensor network and an imu for estimation, a relatively high
horizontal estimation error can appear at close distances to the ugv (∼1-2 m).
Table 6.4: rmse, maximum absolute error as well as the 95th percentile of
the absolute error (in m) for horizontal and vertical position for three differ-
ent landing manoeuvres during the Visionen test. Results are shown with
and without the camera system.
Table 6.5: rmse, maximum absolute error as well as the 95th percentile of
the absolute error (in m/s) for horizontal and vertical velocity for three dif-
ferent landing manoeuvres during the Visionen test. Results are shown with
and without the camera system.
The sensor measurements are also analysed to gain further knowledge about the estimation system. When studying the accelerometer data from the test, unexpected behaviour was found. From the time of engine start, the raw data was very noisy, and this continued throughout the test. When the uav landed and the engines were cut off, an oscillation transient could also be seen in the data. It was suspected that the oscillations are caused by resonance in the sensor platform, excited by vibrations from the engines. To reduce resonance in future tests, the mounting of the platform could be revised. Another unwanted behaviour of the accelerometer data is that it contained outliers. With the current implementation, these outliers could not be detected by the nis-test described in Section 4.2.
To illustrate the performance of the camera system, its estimated position
during landing manoeuvre 2 is studied, see Figure 6.6. The camera system mostly
uses the small tag for the estimation, since the uav is too close to get the entire
large tag in the fov of its camera. Camera data is acquired during the entire
landing manoeuvre.
To study the performance of the uwb sensor network, the ground-truth distance was calculated for each anchor-tag pair based on the anchor positions as well as the positions of the uav and the ugv from Qualisys. Figure 6.7 presents the performance of anchor-tag pairs 1 and 4 during the complete flight: pair 1 follows the ground truth nicely, while pair 4 exhibits some serious outliers. This motivates the use of an outlier rejection algorithm. It has been confirmed that the nis-test, see Section 4.2, manages to reject this kind of outlier. To further study the performance of all anchor-tag pairs, the rmse, the maximum absolute error as well as the 95th percentile of the absolute error are calculated, see Table 6.6. The measurements from anchor-tag pair 3 had outliers of similar size to those of anchor-tag pair 4. Despite these outliers, 95 % of the measurements from all anchor-tag pairs have errors below 30 cm. These errors are caused both by the performance of the uwb radios and by inaccuracies in the measured sensor placements.
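The nis-test itself is defined in Section 4.2 and not reproduced here; for a scalar range measurement it amounts to comparing the squared, covariance-normalised innovation with a chi-square threshold. A sketch under that assumption, with a 99 % gate as a placeholder value:

```python
from scipy.stats import chi2

def accept_range_measurement(innovation: float, innovation_var: float,
                             gate_prob: float = 0.99) -> bool:
    """Chi-square gating of a scalar range innovation (NIS test).

    innovation is the measured minus predicted range and innovation_var its
    predicted variance S = H P H^T + R. The measurement is rejected when the
    normalised innovation squared exceeds the chi-square gate.
    """
    nis = innovation ** 2 / innovation_var
    return nis <= chi2.ppf(gate_prob, df=1)
```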
Figure 6.6: Position estimation from the camera system in the second flight
manoeuvre from the Visionen test. The landing pad of the ugv with four
beams pointing upwards is shown as well.
Figure 6.7: uwb sensor performance for anchor-tag pairs 1 and 4 (top and
bottom). Measured distance is compared to ground truth.
Table 6.6: rmse, maximum absolute error as well as the 95th percentile
of the absolute error (in m) of the range measurements from all anchor-tag
pairs.
Anchor-tag pair   RMSE   Max    95th
1                 0.15   0.79   0.24
2                 0.13   0.62   0.26
3                 0.24   7.26   0.29
4                 0.63   8.48   0.23
7 Conclusion and Future Work
This chapter presents the conclusions of this thesis, with regards to the problem
formulation presented in the introduction, together with suggested future work.
7.1 Conclusions
A combined estimation and control system capable of landing a uav on a mobile
ugv has been presented. The combined system has been tested and evaluated in a
realistic simulation environment, developed with the aid of Gazebo, ArduCopter
SITL and ros. The individual sensors used in the system have been evaluated sep-
arately. Finally, the estimation system has been validated through experiments.
Initially, the performance of the uwb radios was evaluated with physical sensors. Based on the evaluation, three different placements of the uwb anchors were analysed with the crlb, and the anchor placements were decided based on the results. The accuracy of the camera system was investigated with a simulated camera, to get an understanding of the bias and variance of its position estimate. From these evaluations, as well as small-scale experiments indicating the accuracy of the accelerometers and barometers, the estimation system was designed. The implementation of the sensors in the simulation environment was based on these results.
The control system proposed in this work utilises two control structures: a pn controller is used to approach the moving ugv and a pid controller is used to land on top of it. This structure is based on [8]; however, the implementation of the pn controller is modified and a pid controller is used instead of a pd controller. The control system has been evaluated in simulation and has shown positive results. The system managed to safely land the uav in all 100 simulations. The feasibility of the control system cannot be confirmed before a test is conducted with physical hardware. However, since a sitl implementation of the flight controller has been used during simulation, the control system will most likely perform similarly in such an experiment. Despite the successful landings, it was noted that the control system showed a slightly oscillative behaviour during the landing phase, which results in a less robust system. One probable factor behind this behaviour is that the control system does not account for future movements of the ugv.
The estimation system has been evaluated during simulated landings as well. Combined with the control system, landings were achieved in 100 of 100 attempts, which confirms a fully functional landing system. However, when compared to the landing results using true data, the landing positions on the ugv are more spread out and the uav is more prone to restarting its attempts. A uav landing at night is also of interest; therefore, landings have been simulated without the use of the camera system as well. The results show a much more uncertain behaviour during the landing phase, a behaviour that would be unwelcome in a physical landing.
To validate the results from simulation, the estimation system was evaluated experimentally using the hardware platforms. The experiment was conducted in an environment similar to the landing phase, except with a stationary ugv. The estimation accuracy during the experiment was comparable to the accuracy in simulation. This indicates the feasibility of the estimation system with real hardware during the landing phase. The estimation system was also experimentally evaluated without the use of the camera system. The results show that the accuracy is significantly lower when the camera system is not used; the estimation error is two to three times larger. When also considering the behaviour in simulation presented previously, we conclude that a physical landing without a camera is improbable.
The estimation accuracy is significantly worse without a camera and a substitute is most likely needed. Further research should therefore investigate whether an IR-camera can be used instead.
A Hardware Drivers
This appendix describes the hardware drivers used for communication between
the sensors and the Jetson Nano.
B Simulated Sensors

B.1 Attitude
The standard imu-sensor in Gazebo is used to obtain attitude data. The data is noise-free. In order to get a more realistic simulation, noise is added to the attitude data before it is used. The noise is applied as an additional rotation matrix, $R_{noise}$, which is generated from randomly selected Euler angles. The Euler angles, roll, pitch and yaw, are drawn from a Gaussian distribution with zero mean and 1° variance. On top of the randomly drawn angles a bias is added. The bias is randomly chosen as [0.7°, −0.5°, 0.6°], where each element corresponds to an Euler angle. The attitude data is used to compute a rotation matrix $^{W}\!R_b$, which is then contaminated with noise as follows
$$ {}^{W}\!R_{b,noise} = R_{noise} \, {}^{W}\!R_b, \qquad \text{(B.1)} $$
where $^{W}\!R_{b,noise}$ is the rotation matrix used in simulation.
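A sketch of how this noise model can be realised is given below; the use of SciPy and the "xyz" Euler convention are implementation assumptions, while the bias and the 1° spread are the values stated above.

```python
import numpy as np
from scipy.spatial.transform import Rotation

BIAS_DEG = np.array([0.7, -0.5, 0.6])  # roll, pitch, yaw bias from the text
SIGMA_DEG = 1.0                        # spread of the added Euler-angle noise

def noisy_attitude(r_wb: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Contaminate the rotation matrix W_R_b with noise as in (B.1)."""
    euler_deg = BIAS_DEG + rng.normal(0.0, SIGMA_DEG, size=3)
    r_noise = Rotation.from_euler("xyz", euler_deg, degrees=True).as_matrix()
    return r_noise @ r_wb

# Example usage with the identity attitude.
rng = np.random.default_rng(0)
print(noisy_attitude(np.eye(3), rng))
```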
the uav as well. For each anchor-tag pair, the measured distance is calculated as
presented in (4.15), with the noise variance determined in Section 4.4.2.
However, as described in Table 4.1 in Section 4.4.2, the parameters a and b can vary between sensors. Therefore, to get more realistic measurements, a and b are modelled as random variables with a uniform distribution. The scaling factor a is bounded to the interval [1.0028, 1.0036] and the bias b is bounded to the interval [0.01, 0.1], i.e.
$$ a \sim \mathcal{U}(1.0028, 1.0036), \qquad b \sim \mathcal{U}(0.01, 0.1). \qquad \text{(B.2)} $$
The parameters a and b are generated for each anchor-tag pair at the start of a
simulation.
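A sketch of how the per-pair parameters could be drawn and used is given below; the affine range model and the noise level are assumptions standing in for (4.15) and the variance determined in Section 4.4.2.

```python
import numpy as np

def make_uwb_pair(rng: np.random.Generator, noise_std: float = 0.1):
    """Draw per-pair parameters a and b as in (B.2) and return a range model.

    The affine form a * d + b + noise and the noise_std value are assumptions
    made for this sketch; the thesis defines the actual measurement model in
    (4.15) and its noise variance in Section 4.4.2.
    """
    a = rng.uniform(1.0028, 1.0036)  # scaling factor
    b = rng.uniform(0.01, 0.1)       # bias in metres

    def measure(true_distance: float) -> float:
        return a * true_distance + b + rng.normal(0.0, noise_std)

    return measure

# Example: one anchor-tag pair measuring a true distance of 5 m.
rng = np.random.default_rng(1)
pair_1 = make_uwb_pair(rng)
print(pair_1(5.0))
```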
B.3 Camera
Gazebo is capable of simulating a camera sensor. The sensor is attached to a
model in Gazebo and outputs a simulated camera stream based on the pose of
the model. The camera parameters calculated in Section 4.5.1 are used with the
simulated camera to allow a more realistic simulation. Pixelwise awgn is also
added to the simulated image.
B.4 Accelerometer
Gazebo has an implementation of an imu-sensor. The sensor provides the lin-
ear acceleration of the body it is attached to and the data is represented in a body-
fixed coordinate system. The simulated sensor supports modelling of awgn.
C Gazebo Plugins

This appendix describes the Gazebo plugins developed for the simulation environment.
Figure C.1: Example of the horizontal wind force during a simulation run.
Figure C.2: Example of the ugv’s trajectory when using the Random Trajec-
tory plugin.
D Additional Results
This appendix presents additional results from Sections 5.4 and 6.1.
Figure D.1: Time distributions for the 100 simulation runs using true states. The first subfigure (a) shows the total time distribution of the landing manoeuvre, i.e. the APPROACH, FOLLOW and DESCEND states. The second subfigure (b) shows the distribution of the time spent in the APPROACH state. The third subfigure (c) shows the time distribution for the landing phase (the FOLLOW and DESCEND states).
Figure D.2: Time distributions for the 100 simulation runs using estimated states. The first subfigure (a) shows the total time distribution of the landing manoeuvre, i.e. the APPROACH, FOLLOW and DESCEND states. The second subfigure (b) shows the distribution of the time spent in the APPROACH state. The third subfigure (c) shows the time distribution for the landing phase (the FOLLOW and DESCEND states).
D.2 Root Mean Square Error
Figure D.3: Distribution of root mean square error (rmse) for both horizon-
tal and vertical position and velocity, during the Approach-state over 100
runs using the camera system.
Figure D.4: Distribution of root mean square error (rmse) for both hori-
zontal and vertical position and velocity, during the Landing phase over 100
runs using the camera system.
Figure D.5: Distribution of root mean square error (rmse) for both horizon-
tal and vertical position and velocity, during the Approach-state over 100
runs without the camera system.
Figure D.6: Distribution of root mean square error (rmse) for both hori-
zontal and vertical position and velocity, during the Landing phase over 100
runs without the camera system.
Bibliography
[1] Nikolas Giakoumidis, Jin U. Bak, Javier V. Gómez, Arber Llenga, and Niko-
laos Mavridis. Pilot-scale development of a UAV-UGV hybrid with air-based
UGV path planning. Proceedings - 10th International Conference on Fron-
tiers of Information Technology, FIT 2012, (December):204–208, 2012. doi:
10.1109/FIT.2012.43.
[2] Jouni Rantakokko, Erik Axell, Niklas Stenberg, Jonas Nygårds, and Joakim
Rydell. Tekniker för navigering i urbana och störda GNSS-miljöer. Technical
Report FOI-R–4907–SE, Ledningssystem, Totalförsvarets Forskningsinstitut
(FOI), December 2019.
[4] Sinan Gezici, Zhi Tian, Georgios B. Giannakis, Hisashi Kobayashi, Andreas F.
Molisch, H. Vincent Poor, and Zafer Sahinoglu. Localization via ultra-
wideband radios: A look at positioning aspects of future sensor networks.
IEEE Signal Processing Magazine, 22(4):70–84, 2005. ISSN 10535888. doi:
10.1109/MSP.2005.1458289.
[5] Fabrizio Lazzari, Alice Buffi, Paolo Nepa, and Sandro Lazzari. Numerical in-
vestigation of an UWB localization technique for unmanned aerial vehicles
in outdoor scenarios. IEEE Sensors Journal, 17(9):2896–2903, 2017.
[7] Linnea Persson. Autonomous and Cooperative Landings Using Model Pre-
dictive Control. Licentiate thesis, KTH Royal Institute of Technology, Stock-
holm, 2019.
[8] Alexandre Borowczyk, Duc Tien Nguyen, André Phu Van Nguyen, Dang Quang Nguyen, David Saussié, and Jerome Le Ny. Autonomous landing of a multirotor micro air vehicle on a high velocity ground vehicle. IFAC-PapersOnLine, 50(1), 2017.
[19] Robert Mahony, Vijay Kumar, and Peter Corke. Multirotor aerial vehicles:
Modeling, estimation, and control of quadrotor. IEEE Robotics and Au-
tomation Magazine, 19(3):20–32, 2012. ISSN 10709932. doi: 10.1109/MRA.
2012.2206474.
[20] Rudolph Emil Kalman. A new approach to linear filtering and prediction
problems. ASME Journal of Basic Engineering, 1960.
[23] Yoichi Morales, Eijiro Takeuchi, and Takashi Tsubouchi. Vehicle localization
in outdoor woodland environments with sensor fault detection. Proceedings
- IEEE International Conference on Robotics and Automation, pages 449–
454, 2008. ISSN 10504729. doi: 10.1109/ROBOT.2008.4543248.
[25] Amir Beck, Petre Stoica, and Jian Li. Exact and approximate solutions of
source localization problems. IEEE Transactions on Signal Processing, 56
(5):1770–1778, 2008. ISSN 1053587X. doi: 10.1109/TSP.2007.909342.
[28] Martin Enqvist, Torkel Glad, Svante Gunnarsson, Peter Lindskog, Lennart
Ljung, Johan Löfberg, Tomas McKelvey, Anders Stenman, and Jan-Erik
Strömberg. Industriell reglerteknik kurskompendium. Linköpings Univer-
sitet, Linköping, 2014.
[29] Elsevier Science & Technology. Missile Guidance and Pursuit : Kinemat-
ics, Dynamics and Control. Elsevier Science & Technology, 1998. ISBN
9781904275374.
[30] Guangwen Liu, Masayuki Iwai, Yoshito Tobe, Dunstan Matekenya, Khan
Muhammad Asif Hossain, Masaki Ito, and Kaoru Sezaki. Beyond horizontal
location context: Measuring elevation using smartphone’s barometer. Ubi-
Comp 2014 - Adjunct Proceedings of the 2014 ACM International Joint Con-
ference on Pervasive and Ubiquitous Computing, pages 459–468, 2014. doi:
10.1145/2638728.2641670.
[36] Danylo Malyuta. Guidance, Navigation, Control and Mission Logic for
Quadrotor Full-cycle Autonomy. Master’s thesis, ETH Zurich, 2018.
[37] Maximilian Krogius, Acshi Haggenmiller, and Edwin Olson. Flexible Lay-
outs for Fiducial Tags. IEEE International Conference on Intelligent Robots
and Systems, pages 1898–1903, 2019. ISSN 21530866. doi: 10.1109/
IROS40897.2019.8967787.
[38] Alvika Gautam, P. B. Sujit, and Srikanth Saripalli. Application of guidance
laws to quadrotor landing. 2015 International Conference on Unmanned
Aircraft Systems, ICUAS 2015, pages 372–379, 2015. doi: 10.1109/ICUAS.
2015.7152312.
[39] Ruoyu Tan and Manish Kumar. Proportional navigation (PN) based track-
ing of ground targets by quadrotor UAVs. ASME 2013 Dynamic Sys-
tems and Control Conference, DSCC 2013, 1(October 2013), 2013. doi:
10.1115/DSCC2013-3887.