Survey on Computer Vision for UAVs: Current Developments and Trends
DOI 10.1007/s10846-017-0483-z
Received: 28 April 2016 / Accepted: 15 January 2017 / Published online: 27 January 2017
© The Author(s) 2017. This article is published with open access at Springerlink.com
Abstract During the last decade, scientific research on Unmanned Aerial Vehicles (UAVs) has increased spectacularly and led to the design of multiple types of aerial platforms. The major challenge today is the development of autonomously operating aerial agents capable of completing missions independently of human interaction. To this extent, visual sensing techniques have been integrated in the control pipeline of the UAVs in order to enhance their navigation and guidance skills. The aim of this article is to present a comprehensive literature review on vision based applications for UAVs, focusing mainly on current developments and trends. These applications are sorted into different categories according to the research topics among various research groups. More specifically, vision based position-attitude control, pose estimation and mapping, and obstacle detection as well as target tracking are the identified components towards autonomous agents. Aerial platforms could reach a greater level of autonomy by integrating all these technologies onboard. Additionally, throughout this article the concept of fusing multiple sensors is highlighted, while an overview of the challenges and future trends in autonomous agent development is also provided.

Keywords UAVs · SLAM · Visual servoing · Obstacle avoidance · Target tracking

1 Introduction

Unmanned Aerial Vehicles have become a major field of research in recent years. Nowadays, more and more UAVs are recruited for civilian applications in terms of surveillance and infrastructure inspection, thanks to their mechanical simplicity, which makes them quite powerful and agile. In general, aerial vehicles are distinguished for their ability to fly at various speeds, to stabilize their position, to hover over a target and to perform manoeuvres in close proximity to obstacles, while remaining fixed or loitering over a point of interest, and to perform flights indoors or outdoors. These features make them suitable to replace humans in operations where human intervention is dangerous, difficult, expensive or exhaustive.
1.1 Definitions

Unmanned aerial vehicle – Aircraft without a human pilot onboard. Control is provided by an onboard computer, remote control or a combination of both.

Unmanned aircraft system – An unmanned aircraft system (UAS) is an unmanned aircraft and the equipment necessary for the safe and efficient operation of that aircraft. An unmanned aircraft is a component of a UAS. It is defined by statute as an aircraft that is operated without the possibility of direct human intervention from within or on the aircraft [1].

Micro Aerial Vehicle – A small sized unmanned aircraft system running from battery power, which can be operated and carried by one person.

Collision Avoidance – Sensors on the unmanned aircraft detect adjacent air users and alert either an automated on-board system or the remote pilot of their presence and the potential need to take avoiding action [2].

Fail-Safe – A design feature that ensures the system remains safe in the event of a failure and causes the system to revert to a state that will not cause a mishap [3].

Autonomous system – Operations of an unmanned aerial system wherein the unmanned aerial system receives its mission from either the operator who is off the unmanned aerial system or another system that the unmanned aerial system interacts with, and accomplishes that mission with or without human-robot interaction [4].

Autonomy – An unmanned aerial system's own ability of integrated sensing, perceiving, analyzing, communicating, planning, decision-making, and acting/executing to achieve its goals as assigned by its human operator(s) through a designed Human-Robot Interface (HRI) or by another system that the unmanned aerial system communicates with. A UMS's autonomy is characterized into levels from the perspective of Human Independence (HI), the inverse of HRI [4].

Environment – The surroundings of a UAV. The environment can be aerial, ground, or maritime. It includes generic and natural features, conditions, or entities such as weather, climate, ocean, terrain, and vegetation, as well as man-made objects such as buildings, buoys, and vehicles. It can be static or dynamic, can be further attributed in terms of its complexity, and can be described as friendly/hostile. The elements of the environment that matter are those that the UAV and the operator are interested in or aware of [4].

Sensor Fusion – Information processing that deals with the acquisition, filtering, correlation, comparison, association, and combination/integration of data and information from sensors to support UAV objectives of recognition, tracking, situation assessment, sensor management, system control and identity estimation, as well as complete and timely assessments of situations and threats and their significance in the context of mission operation. The processes can involve UAV onboard computing sensors, externally provided sensor information, and human input. The process is characterized by continuous refinement of its estimates and assessments, and by the evaluation of the need for additional sources, or modification of the process itself, to achieve improved results [4].

Perception – A UAV's capability to sense and build an internal model of the environment within which it is operating, and to assign entities, events, and situations perceived in the environment to classes. The classification (or recognition) process involves comparing what it observed with the system's a priori knowledge [4].

Mission – The highest-level task assigned to a UAV [4].

Waypoint – An intermediate location through which a UAV must pass, within a given tolerance, en route to a given goal location [4].

1.2 UAV Types

This massive interest in UAVs has led to the development of various aircraft types in many shapes and sizes to operate in different tasks [5]. Within the scope of this article, 4 categories of UAVs are referred to, namely single rotor helicopters, multi rotor-crafts, fixed wing planes and hybrid combinations. Each of these platforms has its own advantages and disadvantages that let the operator decide which will best fit the application. The 4 types depicted in Fig. 1 (single rotor: [6], multi-rotor: [7], fixed wing: [8], hybrid: [8]) are presented briefly below.
Single rotor – This platform has a main rotor for lift and a tail rotor for controlling the heading. Such platforms can mostly take off and land vertically and do not need airflow over the blades to move forward; the blades themselves generate the required airflow. Piloted helicopters are popular in aviation, but their unmanned versions are not as popular in the UAV research community. A single-rotor helicopter can be operated by a gas motor for even longer endurance compared to multi rotors. The main advantage is that it can carry heavy payloads (e.g. sensors, manipulators) in either hovering tasks or long endurance flights over large outdoor areas. The disadvantages of such platforms are their mechanical complexity, the danger posed by their generally large rotor, and their cost.

Multi rotor – This class of UAVs can be divided into subclasses depending on the number of rotor blades. The most common are quadrotors and hexarotors, while tri-copters and octa-copters have also been developed. Mostly they can take off and land vertically and do not need airflow over the blades to move forward; the blades themselves generate the required airflow. Multi rotors can be operated both indoors and outdoors and are fast and agile platforms that can perform demanding manoeuvres. They can also hover or move along a target in close quarters. The downsides of these types are the limited payload capacity and flight time. On the other hand, their mechanical and electrical complexity is generally low, with the complex parts being abstracted away inside the flight controller and the motors' electronic speed controllers.

Fixed wing – The basic principle of these UAVs consists of a rigid wing with a specific airfoil that can fly based on the lift generated by the forward airspeed (produced by a propeller). Navigation control is achieved through specific control surfaces in the wings known as the ailerons (roll), the elevator (pitch) and the rudder (yaw). The simple structure of such vehicles is their greatest advantage over the other types. Their aerodynamics allow longer flight ranges and loitering as well as high speed motion. Furthermore, they can carry heavier payloads compared to multi rotors, while the drawbacks of these platforms are the need for a runway for takeoff and landing and the fact that they need to move constantly, preventing hovering tasks. The landing is also crucial for safe recovery of the vehicle.

Hybrid – This class is an improved version of fixed wing aircraft. Hybrid vehicles have the ability to hover and to take off and land vertically. This type is still under development.

Overall, rotor crafts are more suitable for applications like infrastructure inspection and maintenance due to their hover capabilities and agile maneuvering. On the other hand, fixed wing vehicles fit better in aerial surveillance and mapping of large areas from greater heights. Table 1 provides a brief overview of the advantages and disadvantages of aerial vehicles.
1.3 UAV Sensing

Some areas where UAVs can be widely exploited are Search and Rescue, Survey, Security, Monitoring, Disaster Management, Crop Management and Communications missions [9, 10]. In the first steps of the UAV era, these aircraft were equipped with exteroceptive and proprioceptive sensors in order to estimate their position and orientation in space. The principal sensors used were the Global Positioning System (GPS) for the position and the Inertial Navigation System (INS), formed mostly by a three axis accelerometer and gyroscope. These sensors, however, have some flaws stemming from their operating principles, which affect the performance of the system. On the one hand, one of the great drawbacks of the GPS lies in its doubtful precision, as it depends on the number of available satellites [11], whereas on the other hand low cost INS suffer from integration drift problems due to propagating bias errors: small errors in the calculated acceleration and angular velocity are consecutively integrated into linear and quadratic errors in velocity and position respectively [12]. Therefore, elaborate estimation processes are essential to guarantee the stability of the system.
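To make the drift issue concrete, the following minimal sketch (illustrative only, not taken from any cited work) dead-reckons a single axis with a small constant accelerometer bias, reproducing the linear velocity error and quadratic position error described above.

```python
import numpy as np

# Illustrative dead-reckoning of one axis with a constant accelerometer
# bias: the first integration grows linearly, the second quadratically.
dt = 0.01                              # 100 Hz IMU
t = np.arange(0.0, 60.0, dt)           # one minute of flight
bias = 0.05                            # m/s^2, plausible low-cost MEMS bias

accel_meas = np.full_like(t, bias)     # true acceleration is zero
vel = np.cumsum(accel_meas) * dt       # first integration  -> linear error
pos = np.cumsum(vel) * dt              # second integration -> quadratic error

print(f"velocity error after 60 s: {vel[-1]:.2f} m/s")  # ~3 m/s
print(f"position error after 60 s: {pos[-1]:.0f} m")    # ~90 m
```

Even a bias of 0.05 m/s^2 accumulates to roughly 90 m of position error within a minute, which is why bias estimation and sensor fusion are indispensable.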
The aforementioned navigational equipment questions the reliability and limits the best possible utilization of a UAV in real life applications. For this reason, new ways to estimate and track the position and orientation of the UAV were needed. An ideal, accurate solution for the calculation of the vehicle's pose would be the fusion of data from multiple collaborative sensors [12]. Nevertheless, multiple sensors can be impractical for some types of UAVs like Micro Aerial Vehicles (MAVs) due to the limited payload, while some sensors malfunction in specific environments (like GPS in indoor environments). Thus, it becomes crucial for the utility provided by UAVs to establish a more generic approach for pose estimation, one that can be applied on any type of aircraft.

Nowadays, the evolution of embedded systems and the corresponding miniaturization have brought powerful yet low-cost camera modules and Inertial Measurement Units (IMUs) that can be mounted on UAVs to extract useful information on board and feed back the necessary data, fused with measurements from inertial sensors. Different types of sensors can be employed depending on the task. Ultrasonic sensors (Fig. 2a) can be directly integrated in obstacle avoidance operations, while laser range finders (Fig. 2c) provide range measurements for obstacle detection and mapping of 3D environments. Visual stereo (Fig. 2b) or monocular camera (Fig. 2d) systems are able to provide depth measurements for obstacle avoidance and mapping tasks. Additionally, they can be tightly coupled with IMUs for visual-inertial ego-motion estimation, and the raw image stream is also required for infrastructure inspection. Some example modular vision systems are depicted in Fig. 2 with a) [13], b) [14], c) [15], d) [16]. In this survey, studies that include cameras as primary or secondary sensors are enlisted. In this manner the UAV enhances its environmental perception, while increasing its overall flying and actuating capabilities. The term Computer Vision defines the generic research area where the characteristics of the real 3D world are interpreted into metric data through the processing of 2D image planes. The basic applications of Computer Vision include machine inspection, navigation, 3D model building and surveillance, as well as interaction with the environment. The accomplishment of these applications requires the execution of several algorithms, which process 2D images and provide 3D information. Some of these algorithms perform object recognition, object tracking, pose estimation, ego-motion estimation, optical flow and scene reconstruction [17]. Consequently, Computer Vision can have a critical contribution to the development of UAVs and their corresponding capabilities.

1.4 Motivation of this Review

The aim of this article is to provide an overview of the most important efforts in the field of computer vision for UAVs, while presenting a rich bibliography in the field that could support future reading in this emerging area. An additional goal is to gather a collection of pioneering studies that could act as a road-map for this broad research area, towards autonomous aerial agents. Since the field of computer vision for UAVs is very generic, the depicted work will focus only on surveying the areas of: a) flight control or visual servoing, b) visual localization and mapping, and c) target tracking and obstacle detection.

It should be highlighted that this article classifies the aforementioned categories following the Navigation - Guidance - Control scheme. The big picture is to provide significant insight into the entire autonomous system by collecting all the pieces together. The concept of navigation covers monitoring the motion of the UAV from one place to another by processing sensor data.
Through this procedure the UAV can extract essential information on its state (kinematics and dynamics - state estimation), build a model of its surroundings (mapping and obstacle detection) and even track objects of interest (target tracking) to enhance its perception capabilities. Thus, by combining localization and perception capabilities, the robotic platforms are enabled for Guidance tasks. In the Guidance system, the platform processes information from the perception and localization parts to decide its next move according to the specified task. In this category, trajectory generation and path planning are included for motion planning, mission-wise decision making or unknown area exploration. Finally, the realization of actions derived from the Navigation and Guidance tasks is performed within the Control section. The controller manipulates the inputs to provide the desired output, enabling actuators for force and torque production to control the vehicle's motion. Generally, different controllers have been proposed to fulfill mission enabled requirements (position, attitude, velocity and acceleration control). In the following sections the major works that employ visual sensors for each defined category will be presented, while the Navigation, Guidance and Control [18] overview scheme is provided in Fig. 3, and a minimal code sketch of this loop is given below.

Fig. 3 Typical overview (variations can apply) of an autonomous aerial system including Sensing, Navigation, Guidance and Control parts. In general, various combinations of these parts are employed to achieve real-world applications, depending on the environment, the aerial platform and human operator needs. In this figure the image feature parameter space along with partial state estimation for Image Based Visual Servoing (IBVS) is also highlighted.
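As an illustration of how these parts interconnect, the following Python skeleton mirrors the Sensing - Navigation - Guidance - Control flow of Fig. 3. All class and method names here are hypothetical placeholders for this sketch, not an API from the surveyed works.

```python
# Minimal sketch of the Navigation-Guidance-Control loop of Fig. 3.
# Every component below is a hypothetical placeholder.
class AutonomousAgent:
    def __init__(self, sensors, estimator, mapper, planner, controller):
        self.sensors = sensors        # camera, IMU, range sensors
        self.estimator = estimator    # state estimation (e.g. visual-inertial)
        self.mapper = mapper          # mapping / obstacle detection
        self.planner = planner        # guidance: path / trajectory generation
        self.controller = controller  # position-attitude control

    def step(self, mission):
        data = self.sensors.read()
        state = self.estimator.update(data)         # Navigation: where am I?
        world = self.mapper.update(data, state)     # Navigation: what is around?
        setpoint = self.planner.plan(state, world, mission)   # Guidance
        self.controller.track(state, setpoint)      # Control: motor commands
```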
The rest of this article is structured as follows. In Section 2 a complete overview of the most important approaches in the field of flight control will be presented. Furthermore, in Section 3 a survey on Perception (visual Simultaneous Localization and Mapping (SLAM), obstacle detection and target tracking) and state estimation (visual odometry) for unmanned aerial platforms will be further analyzed. Moreover, in Section 4, representative research efforts that combine the aforementioned fields with mission planning tasks towards visual guidance will be listed. Finally, in Section 5 the conclusions will be provided, extended with a discussion on specific challenges and future trends.

2 Flight Control

In this section, different control schemes and algorithms are described that have been proposed throughout the years for UAV position, attitude and velocity control. Initially, visual servoing schemes are described, followed by vision based UAV motion control.

2.1 Visual Servoing

The main idea of Visual Servoing is to regulate the pose {^C ξ_T} (position and orientation) of a robotic platform relative to a target, using a set of visual features {f} extracted from the sensors. Visual features, in most of the cases, are considered as points, but can also be parametrised as lines or geometrical shapes such as ellipses. More specifically, image processing methods are integrated in the control scheme so that either the 2D features or the 3D pose measurements, along with IMU data {z_IMU}, are fed back into the closed loop system.

In general, Visual Servoing can be divided into three techniques: a) Image Based Visual Servoing (IBVS), b) Position Based Visual Servoing (PBVS), and c) Hybrid Visual Servoing (IBVS + PBVS), depending on the type of the available information that the visual system provides to the control law. In the IBVS method, the 2D image features are used for the calculation of the control values, while in the PBVS method the 3D pose of a target is utilized [19, 20]. In Figs. 4 and 5 the basic structures of the IBVS and PBVS UAV control schemes are presented, while the rest of this Section provides a brief overview of the contributions in this field.
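To make the IBVS idea concrete, the following minimal sketch computes the classical control law v = -λ L⁺ e for point features, where L is the interaction matrix and e the feature error [19, 20]. The feature depths Z are assumed to be available; in practice they are estimated or approximated.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a point feature at normalized
    image coordinates (x, y) and depth Z (rows: x-error, y-error)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw [vx vy vz wx wy wz] driving the features
    toward their desired locations: v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ e
```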
In [21] an adaptive IBVS scheme to control firstly the 3D translational motion and secondly the yaw angle of a quadrotor with a fixed downward looking camera has been presented. This method is based on image features, in perspective image space, from an object without any prior information about its model. The controller followed a backstepping approach and regulated the position using error information on the roll and pitch angles. In the same way, [11] presented an innovative contribution for controlling the 3D position of a Vertical Takeoff and Landing Vehicle (VTOL) from the 2D projective geometry. More specifically, this research aimed to develop a UAV capable of hovering over a specified target for inspection tasks, by utilizing only image data in the control process. The suggested controller was hybrid and combined the advantages of the PBVS and IBVS techniques, while a significant benefit of this hybrid approach is that it can also operate with 3D objects of unknown geometry. In the approach presented in [22], similarly, the aim was to control the position and orientation of an aerial platform by incorporating image features in the control loop. Initially, an IBVS control structure was implemented to provide smooth vertical motion and yaw rotation for the UAV, observing ground landmarks from a fixed down-looking camera. For the horizontal motion control, a novel approach was employed through the utilization of a virtual spring. The proposed controller only considered the camera, the propeller models and the mass of the UAV as parameters.

In [23] two different visual servoing approaches have been proposed for the real time navigation of a quadrotor across power lines. The first controller implemented an IBVS method enhanced with a Linear Quadratic Servo technique, while on the contrary the second controller implemented a partial PBVS method, based on the estimation of the UAV's partial pose relative to the power conductors. A similar research effort in [24] presented an IBVS approach for linear structure tracking during survey missions and automatic landing. In [25], a VTOL platform navigation system using the IBVS technique has been presented. The goal of this research was the control of a VTOL UAV to perform close distance manoeuvring and vertical structure inspections in outdoor environments based on image features such as lines. Likewise, [26] presented a novel approach of Skid-to-Turn manoeuvres for a fixed wing UAV, to locally inspect a linear infrastructure using IBVS control. This work provided a comparison between Skid-to-Turn and Bank-to-Turn manoeuvre control performance for inspection applications. Moreover, in [27] a control method has been presented that was able to stabilize a UAV in a circular orbit, centered above a ground target, using only visual and proprioceptive data through an IBVS approach. In this case, the fixed wing UAV was equipped with a gimballed camera. Similarly, [28] proposed a visual servoing control scheme for the stabilization of a quadrotor UAV. The presented approach integrated a novel visual error that improved the conditioning of the closed loop Jacobian matrix in the neighbourhood of the desired set point. Another novel approach has been presented in [29], where a control scheme utilized computer vision for UAV hovering above 2D targets. This method was intended for inspection tasks, where the UAV is tolerant to small changes in its orientation as long as it keeps the object inside the camera's field of view. The proposed controller was able to integrate the homography matrix from the vision system and also to decouple the translation and orientation dynamics of the UAV. Some previous and complementary works in this area have also been presented in [30–32].

The collaboration of two quadrotors for vision-based lifting of a specific payload with unknown position has been presented in [33]. In this approach, the UAVs were equipped with downward-looking cameras and utilized the information from the vision system to attach to their docking positions on the target. As before, in this case the IBVS method has been utilized for the visual information and a corresponding sliding mode controller has been designed and implemented.

In [34] a UAV that was controlled solely from visual feedback, using the faces of a cuboid as reference, has been presented. In this approach, a camera was tracking the UAV's motion and rotation in 3D space and calculated its pose. Moreover, in [35] a UAV that was able to accurately follow a user-defined trajectory by only using visual information, without the need of an IMU or a GPS, has been presented. The proposed approach was able to map the error of the image features to the error of the UAV's pose in the Euclidean space, while in the sequel this error was integrated into the closed-loop trajectory tracking feedback controller. This alternative visual servoing strategy differed from the classical PBVS and IBVS techniques.

In [36] a control algorithm for the autonomous landing of a VTOL on a moving platform has been presented, based on the utilization of the IBVS technique. In this case, the platform was tracked by an image based visual servoing method, which also generated a velocity reference as an input to an adaptive sliding controller. This adaptive control was able to compensate the ground effect during the manoeuvre. Furthermore, [37] also suggested a vision based control system for the autonomous landing of a small-size fixed wing UAV. During the landing phase, the IBVS provided the controller with manoeuvring information like the pitch and yaw angles so that the UAV flies directly into a visual marker, with the marker recognition achieved through colour and moment based detection methods.
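A PBVS-style front end needs the relative 3D pose of the target. The sketch below is an assumption-laden illustration, not the implementation of [36] or [37]: it recovers the pose of a known square marker with OpenCV's solvePnP, which a landing controller could then regulate toward a desired pose.

```python
import cv2
import numpy as np

def marker_pose(corners_px, marker_size, K, dist):
    """Pose of a known planar marker from its 4 detected corners.
    corners_px: 4x2 pixel corners (TL, TR, BR, BL); marker_size in metres;
    K, dist: camera intrinsics and distortion coefficients."""
    s = marker_size / 2.0
    obj = np.array([[-s,  s, 0], [ s,  s, 0],
                    [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)      # rotation: marker frame -> camera frame
    return ok, R, tvec              # 3D pose fed to the PBVS control law
```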
and estimated the aircraft's ego-motion and depth map with an unknown scale factor. An adaptive observer converted the scaled data into absolute velocity and the real position relative to the obstacle, and finally the proposed controller was able to integrate these measurements for autonomous navigation. In [43] a UAV perception system for autonomous UAV landing and position estimation has been implemented. The computer vision algorithm was utilized during the landing process by sending data to a controller for aligning the UAV with the pad. On board the UAV, a sonar and an optic flow sensor were also mounted for altitude, position and velocity control. In [44] a novel strategy for close-to-the-ground VTOL-UAV manoeuvring, like hovering around, landing and approaching a target, has been described. The framework of the time-to-contact (tau) theory has been implemented for autonomous navigation. A monocular camera and an IMU were employed by the developed control law, which integrated their data through a novel visual parameter estimation filtering system. In [45] a quadrotor helicopter capable of autonomous hovering, navigation in unknown environments and object gripping using low cost sensors has been presented. The vehicle stabilization was accomplished by a PD controller, while an attitude estimation filter reduced the noise from the sensor measurements. Navigation was achieved by incorporating the position and yaw angle estimations of the visual Simultaneous Localization and Mapping algorithm into a nonlinear sigmoid based controller. The aerial gripping was accomplished with a second infrared camera able to estimate the 3D location of an object and send the measurements to a third controller. In [46] a real-time vision system for UAV automatic landing has been implemented. The helipad was detected using an image registration algorithm and the heading direction of the UAV was computed with Hough Line Detection and Heron's Formula. The UAV camera images were binarized with an adaptive threshold selection method before being processed for the landing. Another approach [47] proposed a vision-based algorithm for efficient UAV autonomous landing. Firstly, the CamShift algorithm was applied to detect the helipad region, followed by the SURF algorithm in order to calculate the position and the velocity of the UAV. Afterwards, the SURF results and the IMU data were combined through a Kalman filter for the control of the UAV. In [12] a quadrotor vehicle has been developed towards autonomous take off, navigation and landing. The rotor-craft was equipped with a stereo camera and IMU sensors. The measurements of these sensors were merged through a Kalman filter in order to remove noise and improve the accuracy of the UAV state estimation. The camera ego-motion was computed by a stereo visual odometry technique.

3 Navigation

In this section, major research in the fields of visual localization and mapping, obstacle detection and target tracking is presented.

3.1 Visual Localization and Mapping

The scope of localization and mapping for an agent is to localize itself locally, estimate its state and build a 3D model of its surroundings by employing, among others, vision sensors [48]. In Fig. 6, some visual mapping examples are depicted: a) [49], b) [50], c) [51]. In a) dense 3D reconstruction from a downward looking camera on a MAV is demonstrated, while in b) a complete aerial setup towards autonomous exploration is presented; the map shown is an occupancy map and the system relies on a stereo camera and a downward looking camera for visual inertial odometry and mapping. Similarly, in c) another approach for autonomous exploration is described, where the system uses a stereo camera and an inertial sensor for pose estimation and mapping; the Figure depicts the raw image streams, the occupancy map and the dense pointcloud. The rest of this section briefly provides an overview of the contributions in this field.

Towards this direction, in [52] a visual pose estimation system from multiple cameras on board a UAV, known as Multi-Camera Parallel Tracking and Mapping (PTAM), has been presented. This solution was based on the monocular PTAM and was able to integrate concepts from the field of multi-camera ego-motion estimation. Additionally, in this work a novel extrinsic parameter calibration method for non-overlapping field of view cameras has been proposed.
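Many of the systems in this section fuse visual position fixes with inertial data through some form of Kalman filtering. The minimal linear predict-update sketch below illustrates that common structure on a single axis; it is a deliberate simplification of the extended and multiplicative filters used in the cited works.

```python
import numpy as np

class PosVelKF:
    """Toy 1-axis constant-velocity Kalman filter: IMU acceleration
    drives the prediction, a visual position fix drives the update."""
    def __init__(self, q_accel=0.5, r_vision=0.05):
        self.x = np.zeros(2)              # state: [position, velocity]
        self.P = np.eye(2)                # state covariance
        self.q = q_accel                  # process (acceleration) noise
        self.r = r_vision                 # vision measurement noise

    def predict(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        G = np.array([0.5 * dt * dt, dt])
        self.x = F @ self.x + G * accel
        self.P = F @ self.P @ F.T + self.q * np.outer(G, G)

    def update(self, vision_pos):
        H = np.array([[1.0, 0.0]])        # vision observes position only
        S = H @ self.P @ H.T + self.r
        K = (self.P @ H.T) / S            # Kalman gain
        self.x = self.x + (K * (vision_pos - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```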
Fig. 6 Different approaches for agent localization inside their surrounding area and simultaneous 3D representation of the area. The
map could be represented in pointcloud form (a,c) or occupancy blocks (b,c) to reduce computation demands
The combination of a visual graph-SLAM with a multiplicative EKF for GPS-denied navigation has been presented in [53]. An RGB-D camera, an IMU and an altimeter sensor were mounted on board the UAV, while the system consisted of two subsystems, one with major priority for the UAV navigation and another for the mapping, with the first one being responsible for tasks like visual odometry, sensor fusion and vehicle control.

In [54] a semi-direct monocular visual odometry algorithm for UAV state estimation has been described. The proposed approach is divided into two subsystems regarding motion estimation and mapping. The first thread implements a novel pose estimation approach consisting of three parts: image alignment through minimization of the photometric error between pixels, 2D feature alignment to refine the 2D point coordinates, and finally minimization of the reprojection error to refine the pose and structure for the camera. In the second thread, a probabilistic depth filter is employed for each extracted 2D feature to estimate its 3D position. As a continuation, the authors in [55] proposed a system for real time 3D reconstruction and landing spot detection. In this work, a monocular approach uses only an onboard smartphone processor for semi-direct visual odometry [54], multi sensor fusion [56] and a modified version of Regularized Monocular Depth Estimation (REMODE) [57]. The depth maps are merged to build the elevation map in a robot centric approach. Afterwards, the map can be used for path planning tasks. Specifically, experimental trials were performed to demonstrate autonomous landing by detecting a safe flat area in the elevation map. Additionally, in [49] a system that integrated SVO odometry in an aerial platform used for trajectory following and dense 3D mapping has been presented. The pose estimations from visual odometry were fused with IMU measurements to enhance the state estimation used by the controllers to stabilize the vehicle and navigate along the path. It should be highlighted that the biases of the IMU were estimated online. The estimated position and orientation were close to the ground truth values with small deviations.
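The image-alignment step of such direct and semi-direct methods minimizes a photometric error. The sketch below is a simplification (nearest-neighbour sampling, no patch warping or image pyramid): it evaluates the residual that a pose optimizer would minimize over candidate motions (R, t).

```python
import numpy as np

def photometric_error(I_ref, I_cur, pts_px, depths, K, R, t):
    """Sum of squared intensity residuals for sparse reference pixels
    pts_px (Nx2) with known depths, under candidate motion (R, t)."""
    Kinv = np.linalg.inv(K)
    err = 0.0
    for (u, v), d in zip(pts_px, depths):
        p_ref = d * (Kinv @ np.array([u, v, 1.0]))   # back-project pixel
        p_cur = R @ p_ref + t                        # move to current frame
        q = K @ p_cur
        u2, v2 = q[0] / q[2], q[1] / q[2]            # re-project
        if 0 <= int(v2) < I_cur.shape[0] and 0 <= int(u2) < I_cur.shape[1]:
            r = float(I_cur[int(v2), int(u2)]) - float(I_ref[int(v), int(u)])
            err += r * r                             # photometric residual
    return err   # image alignment seeks the (R, t) minimizing this value
```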
In [58] the optimization of both the Scaling Factor and the Membership Function of a Fuzzy Logic Controller by Cross-Entropy for effective fail safe UAV obstacle avoidance has been presented. This control method was able to integrate the measurements from a monocular visual SLAM based strategy, fused with inertial measurements, while the inertial SLAM computed the information for the navigation of the UAV. Furthermore, in [59] a Rao-Blackwellized approach has been described for the SLAM problem of a small UAV. This work proposed a factorization method to partition the vehicle model into subspaces, and a particle filter method was incorporated into SLAM. For the localization and mapping parts, firstly an EKF was applied to the velocity and attitude estimation by fusing the on board sensors, then a Particle Filter estimated the position using landmarks, and finally parallel EKFs processed the landmarks for the map. The aircraft was equipped with an IMU, a barometer and a monocular camera. The UAV's motion has been estimated by a homography measurement method and the features were computed by the SIFT algorithm [60], while some highly distinguishable features were considered as landmarks.

In [61] a cooperative laser and visual SLAM approach for a UAV that depends solely on a laser scanner, a camera and an inertial sensor has been proposed. The characteristic of the vision subsystem was the correlation of the detected features with the vehicle state, and the fact that the detected point database was updated in every loop by an EKF. Prior to the update, the image features were matched (nearest neighbour [62] and Mahalanobis threshold [63]) with their correspondences from the database and the new estimations were processed by the filter. The laser subsystem performed a Monte Carlo pose search, where the vision data were merged in order to improve point scan matching. The combination of these sensors provided updates to the vehicle state, and the overall proposed scheme resulted in a robust UAV navigation ability in GPS denied environments.

Additionally, in [64] a navigation system that incorporated a camera, a gimballed laser scanner and an IMU for UAV pose estimation and mapping has been presented. Furthermore, in the same article a method has been presented for the calibration of the camera and the laser sensors, while a real time navigation algorithm based on the EKF SLAM technique for an octorotor aircraft has also been established.

In [65] a monocular visual SLAM system for a UAV in GPS denied environments has been presented. This approach followed a hierarchical structure for the observations of the camera module. The motion of the vehicle (attitude and velocity) was calculated using the homography relation of consecutive frames from features extracted by the SIFT descriptor. The measurements of the camera were coupled with IMU data through an EKF and, based on these measurements, the velocity and the attitude of the aircraft were estimated. Another EKF was applied to the localization problem of the UAV as well as the mapping of the surrounding environment. An inverse depth parameterization was implemented to initialize the 3D position of the features, and the usage of the Mahalanobis distance and the SIFT descriptor for feature matching enhanced the robustness of the proposed scheme.
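The homography front end used by [65] (and by the mosaic approach of [79] below) can be sketched with standard OpenCV building blocks. The snippet matches SIFT features between consecutive frames and decomposes the resulting homography into candidate motions; it is an illustration of the technique, not the authors' code.

```python
import cv2
import numpy as np

def homography_motion(img_prev, img_cur, K):
    """Estimate inter-frame motion candidates from the homography of
    consecutive frames (planar-scene assumption)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_prev, None)
    kp2, des2 = sift.detectAndCompute(img_cur, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t, n) solutions; physical
    # constraints (e.g. points in front of the camera) select one.
    n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```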
In [66] a robust method for accomplishing multi UAV cooperative SLAM has been presented. In the presented approach, every UAV in the swarm was equipped with an IMU and a stereo camera system. The SLAM algorithm operated on each UAV and the information was filtered through an H∞ nonlinear controller. The system accuracy for both the position of the vehicle and the map cartography depended on feature re-observation, i.e. when a UAV observed features already registered by another UAV.

In [67] a visual SLAM based system for ground target locking has been proposed, while at the same time estimating the UAV's position despite dubious functioning of the sensor; the 3D model of the target was assumed a priori known. The UAV was equipped with a camera and a GPS sensor on board, and the SLAM technique implemented a probabilistic filtering scheme to extract geometric information from the image. The GPS data were fused with the geometric information and the projected points of the 3D model by the utilization of Kalman and unscented Kalman filters, in order for the system to estimate the vehicle's pose. The visual information from the camera referred to both the target model and the non-target region for better accuracy, especially for the case where the GPS sensor malfunctioned.

In [68] a monocular vision based navigation system for a VTOL UAV has been proposed, where a modified Parallel Tracking and Multiple Mapping method has been utilized for improving the functionality of the overall system. The proposed algorithm was able to control the UAV position and simultaneously create a map. Furthermore, in [69] a particle filter approach for the SLAM method has been presented, where an IMU and a camera were mounted on board the RMAX aircraft and fused. The particle filter processed the state of the helicopter and a Kalman filter was responsible for building the map. The vision data consisted of Points of Interest (PoIs), or features in the image, computed by the Harris corner detector [70]. In the presented approach, a linear Gaussian substructure in the vehicle dynamics lowered the dimensions of the particle filter and decreased the overall computational load. This approach included an extra factorization of the probability density function when compared to the FastSLAM algorithm [71].

Furthermore, in [72] and [73] the implementation problem of a bearing-only SLAM algorithm for a high speed aerial vehicle, combining inertial and visual data based on an EKF, has been presented.

In [50] a vision based UAV system for unknown environment mapping and exploration using a front-looking stereo camera and a down-looking optic flow camera was presented. This approach aimed to perform pose estimation, autonomous navigation and mapping on board the vehicle.

The Smartcopter, a low cost and low weight UAV for autonomous GPS-denied indoor flights using a smartphone as its processing unit, was presented in [74]. This system was capable of mapping, localization and navigation in unknown 2D environments with markers, while a downward looking camera tracked natural features on the ground and the UAV was performing SLAM.

Furthermore, [75] proposed a vision based SLAM algorithm for a UAV navigating in riverine environments. The suggested algorithm integrated the reflection in the water and developed a reflection matching approach with a robot-centric mapping strategy. The UAV was equipped with multiple sensors (INS, forward facing camera and altimeter) for both the navigation and state estimation processes.

In [76], vision based altitude estimation for a UAV was presented. The aircraft's relative altitude to a known ground target was computed by combining the given ground target information (length) and localization methods. This approach was not strictly limited to flat ground targets. In [77] a scene change detection method was described based on a vision sensor for creating a sparse topological map. The map contained features of interest from the environment (key locations), which the algorithm was able to detect and describe. The key locations were calculated by an optical flow method using a Canny edge detector [78]. The estimated flow vectors were filtered and smoothed to maintain valid information, and afterwards it was decided whether the vectors were new observations, in order for the SIFT descriptor, based on a bag-of-words approach, to update the map database.

A novel mosaic-based simultaneous localization approach, using mosaics as environment representations, has been presented in [79]. In this scheme, successively captured images, combining their homography relations, were used for estimating the motion of the UAV. Simultaneously, a mosaic of the stochastic relations between the images was created to correct the accumulated error and update the estimations. The application of this novel method results in the creation of a network of image relations.

In [80] a UAV system for environment exploration and mapping, fusing ultrasonic and camera sensors, was developed. In the presented algorithm, the 2D marker planar data extracted from the image and the depth measurements from the ultrasonic sensor were merged to compute the UAV's position, while other ultrasonic sensors detected the obstacles. In the sequel, this information was further processed in order to build a map of the surrounding area. In the presented evaluation scenario, it was assumed that the quadrotor was able to move vertically up and down without rotating around its axis. Finally, in [81] a low cost quadrotor capable of visual navigation in unstructured environments using off-board processing has been developed. The main components of this work were a SLAM system, an EKF and a PID controller. This research approach proposed a novel closed-form maximum likelihood estimator to remove the measurement noise and recover the absolute scale of the visual map.

In [82] a real time visual-inertial navigation strategy and control of a UAV have been proposed. A novel feature database management algorithm has also been presented for updating the feature list utilizing a confidence index. The vision algorithm employed the Harris corner detector for feature localization and then, through the feature correspondence method, the database was updated. An EKF integrated the camera, IMU and sonar measurements and estimated the vehicle's state.
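A feature database with a confidence index, in the spirit of [82], can be illustrated as follows; the matching rule and thresholds here are assumptions made for the sketch, not the authors' parameters.

```python
import cv2
import numpy as np

def update_database(gray, database, match_radius=4.0):
    """Detect Harris corners and update a confidence-indexed database:
    re-observed entries gain confidence, unseen entries decay."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8,
                                      useHarrisDetector=True, k=0.04)
    pts = corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
    n0, matched = len(database), set()
    for c in pts:
        for i in range(n0):
            if i not in matched and \
               np.linalg.norm(database[i]["pt"] - c) < match_radius:
                database[i]["pt"] = c
                database[i]["conf"] += 1          # re-observed feature
                matched.add(i)
                break
        else:
            database.append({"pt": c, "conf": 1})  # new candidate feature
    for i in range(n0):
        if i not in matched:
            database[i]["conf"] -= 1               # decay unseen features
    database[:] = [e for e in database if e["conf"] > 0]
    return database
```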
In [83] a flight control scheme is developed for autonomous navigation. Pose estimation (PTAM) from a visual sensor is fused with IMU data to retrieve the full state of the platform. A non-linear controller regulates the position and attitude of the UAV in an innerloop-outerloop structure. A modified SLAM (VSLAM) algorithm is implemented to assist in trajectory tracking for the controller. In [84] a multi camera system for visual odometry is demonstrated. The sensors used are ultra wide angle fisheye cameras. This work highlights the advantages of this setup against traditional pose estimators. In [85] a monocular visual inertial odometry algorithm has been presented. This work uses pixel intensity errors of image patches, known as the direct approach, instead of traditional point feature detection. The identified features are parametrized by a bearing vector and a distance parameter. An EKF is designed for state estimation, where the intensity errors are used in the update step. In this approach, a fully robocentric representation of the full filter state is followed. Many experimental trials with micro aerial vehicles have been performed to demonstrate the performance of the algorithm. Similarly, in [86] a visual inertial integrated system onboard a UAV for state estimation and control for agile motion has been developed. The odometry algorithm fuses data from a high frame rate monocular estimator, a stereo camera system and an IMU. Experimental results are provided using a nonlinear trajectory tracking controller. In [87] a full system for visual inertial state estimation has been developed. This work proposed novel outlier rejection and monocular pose estimation with low computational cost, suitable for online applications. Similarly, in [88] the combination of visual and inertial sensors for state estimation has been demonstrated. The core algorithm for sensor fusion is an Unscented Kalman Filter acting on the Lie group SE(3). The authors extended the applicability of the UKF state uncertainty and modelling to cases like Lie groups that do not belong to a Euclidean space.

In [89] collaborative vision based localization of MAVs and mapping using IMU and RGB-D sensors has been proposed. A monocular visual odometry algorithm is used for localization tasks. The depth data are processed to solve the scaling issue of the monocular odometry. Information from multiple agents is transmitted to a ground station where, in case of sufficient overlap between agent views, the maps are merged in a global coordinate frame. The developed approach provides both sparse and dense mapping. In a similar manner, in [90] a fleet of aerial vehicles has been employed to form a collaborative stereo camera for localization tasks. The sensors used in the proposed scheme are a monocular camera, an IMU and a sonar for each agent. Sensor measurements are fused in an EKF for state estimation. Finally, a formation control is developed to maximize the overlapping field of view of the vehicles. This work presented an experimental evaluation.

Visual Simultaneous Localization and Mapping for UAVs is still facing various challenges towards a global and efficient solution for large scale and long term operations. The fast dynamics of UAVs pose new challenges that should be addressed in order to reach stable autonomous flights. Some of the encountered challenges are shown in Table 4.

Table 4 Vision based localization and mapping challenges

Visual localization and mapping:
• Dependence on illumination conditions
• High processing time for dense mapping
• Occlusion handling
• Sensitivity to fast movements
• Dynamic environments

3.2 Obstacle Detection

Obstacle detection and avoidance capabilities of UAVs are essential towards autonomous navigation. This capability is of paramount importance in classical mobile robots; however, it is transformed into a huge necessity in the special case of autonomous aerial vehicles, in order to implement algorithms that generate collision free paths, while significantly increasing the UAV's autonomy, especially in missions where there is no line of sight. Figure 7 presents visualized obstacle free paths: a) [50], b) [91], c) [92], d) [93]. In this figure different obstacle detection and avoidance approaches are presented, where a), b) and c) depict identified obstacles in 3D and d) in 2D. Additionally, b) and d) demonstrate the trajectory followed to avoid objects.
Fig. 7 Various examples for UAV sense and avoid scenarios either in 3D or in 2D. They are applied either indoors or outdoors in
corridor-like environments or in open spaces with specific obstacles
In [93] a novel stereo vision-based obstacle avoidance technique for MAV tasks was introduced. Two stereo camera systems and an IMU were mounted on the quadrotor. Initially, the stereo rigs were tightly hardware-synchronized and were designed to build a 3D global obstacle map of the environment, using 3D virtual scans derived from processed range data. The second part of this approach consisted of a dynamic path planning algorithm called Anytime Dynamic A*, which recomputed in every step a suboptimal path to the UAV's goal point. This path planner utilized the data from the obstacle map and was able to re-plan the current path.

In [94] a monocular feature estimation algorithm for terrain mapping was presented, which performed obstacle avoidance for UAVs. The proposed method utilized an EKF to estimate the location of image features in the environment, with the major advantage being the fast depth convergence of the estimated feature points, which was achieved by the utilization of an inverse depth parameterization. In the presented approach, the converged points were stored in an altitude map, which was also used for performing the obstacle avoidance operation.
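The inverse depth parameterization used in [94] encodes a landmark through its anchor position, two bearing angles and the inverse depth ρ = 1/d, which converges quickly in a filter even for distant points. A minimal conversion back to a Euclidean point looks as follows; angle conventions vary between implementations, so this is one common choice.

```python
import numpy as np

def inverse_depth_to_point(anchor, azimuth, elevation, rho):
    """Euclidean 3D point from an inverse-depth landmark:
    anchor (3,) camera position at first observation, bearing angles,
    and rho = 1/depth. rho -> 0 pushes the point toward infinity."""
    m = np.array([np.cos(elevation) * np.sin(azimuth),
                  -np.sin(elevation),
                  np.cos(elevation) * np.cos(azimuth)])  # unit bearing
    return anchor + m / rho
```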
In [95] a monocular visual odometry algorithm enhanced by a laser sensor was presented. The algorithm utilized a template matching approach based on grey correlation to detect laser spots and a grey centroid method to estimate the center of the spot. Afterwards, the distance from the spot was computed using geometry, with the assistance of a laser sensor. Furthermore, in [96] a vision based obstacle avoidance approach using an optical flow method based on the Lucas-Kanade gradient has been proposed, with the general aim to extract image depth. Apart from obstacle localization, this work also presented an algorithm for the estimation of the obstacles' shape. Similarly, in [97] a novel monocular motion estimation and scene reconstruction approach has been presented. The motion and depth information were recovered by the utilization of a robust optical flow measurement and point correspondence algorithm applied to successive images. This approach also suggested a visual steering strategy for obstacle avoidance. The proposed scheme utilized the UAV motion information and the 3D scene points for collision free navigation, while the steering was based on the concept that the vehicle adjusts its direction towards the furthest obstacle and does not reconstruct the geometry of the whole environment like SLAM techniques.

In [98], a monocular method for obstacle detection in 3D space by a UAV was proposed. This strategy made the vehicle capable of generating the 3D model of the obstacle from a 2D image analysis. The general motivation behind this research was that the aircraft, at the moment it detected the obstacle, would start following a circular path around it. In every iteration, the measured points and the estimated points from the database were processed by the Z-test correspondence algorithm in order to find their correspondences. In the sequel, the new measurements replaced the previous estimations and so the database was updated.

In [99], the necessity of real-time depth calculation for a UAV detecting and avoiding obstacles using monocular vision was highlighted. This proposal provided a method to obtain 3D information combining Multi-scale-Oriented Patches (MOPS) and the Scale-Invariant Feature Transform (SIFT). In [100] and in [101] a mission scenario was presented where a UAV was capable of firstly exploring an unknown local area and afterwards performing a visual target search and tracking, while avoiding obstacles using its own constructed maps. This particular task was accomplished by fusing measurements from vision and laser sensors.

In [102] the precision of a UAV's classical navigation system (GPS and INS) was enhanced with the utilization of a camera, in order to navigate and detect obstacles in the environment. A Kalman filter was utilized to estimate the error in the navigation system between the received GPS information and the cameras' measurements. Meanwhile, epipolar geometry was applied to the moving camera for the reconstruction of the environment, and this information was utilized for obstacle detection and avoidance.

The VISual Threat Awareness (VISTA) system for passive stereo image based obstacle detection for UAVs was presented in [103]. The system utilized block matching for the stereo approach, in combination with an image segmentation algorithm based on graph cuts for collision detection. In [104], a controller to plan a collision free path when navigating through an environment with obstacles has been presented. The proposed controller had a two-layer architecture where, in the upper layer, a neural network provided the shortest distance paths, whereas in the bottom layer a Model Predictive Controller obtained dynamically feasible trajectories, while overall the obstacles were assumed to be cuboids.

In [105], a bio-inspired visual sensor was presented for obstacle avoidance and altitude control. The developed insect-influenced sensor was based on optic flow analysis. This approach proposed a novel, specific mirror-shaped surface that scaled down the speed of image motion and removed the perspective distortion. In this approach, the mirror simplified the optic flow computation and also provided a 3D representation of the environment. In [106], a technique that combined optic flow and stereo vision methods in order to navigate a UAV through urban canyons was presented. The optic flow part of this technique was accomplished by a pair of sideways-looking cameras that kept the vehicle on track, while the stereo vision information was obtained from a forward facing stereo pair and was used to avoid obstacles.

In [107] a visual fuzzy servoing system for obstacle avoidance in UAVs using a front looking camera was presented. The control process was performed on an off-board computational platform and the result was transmitted to the vehicle to correct its route. The obstacle avoidance concept was to firstly track the obstacle and then try to keep it to the right or to the left of the image, until a specific yaw angle was reached. In the presented approach, and for coloured obstacle avoidance, the CamShift algorithm [108] has been utilized.

In [109], both the hardware and software framework of a Hummingbird quadrotor able to hover and avoid obstacles autonomously using visual information was presented. The visual information was processed successively for the navigation of the UAV: firstly, the Shi-Tomasi descriptor [110] was applied to find features of interest in the image. In the sequel, the Lucas-Kanade optical flow algorithm [111] maintained the features located in consecutive frames, and these measurements were integrated in an EKF for the ego-motion estimation of the camera, which calculated the pose of the vehicle. Furthermore, in this article a fast environmental mapping algorithm based on least squares pseudo-intersection has also been presented. Finally, this research presented a fast and effective novel heuristic algorithm for collision free navigation of the UAV.
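The Shi-Tomasi plus pyramidal Lucas-Kanade front end described for [109] is a standard OpenCV pipeline; a minimal sketch with illustrative parameters is given below, whose per-feature flow vectors would feed the ego-motion EKF.

```python
import cv2
import numpy as np

def track_features(prev_gray, cur_gray, prev_pts=None):
    """Shi-Tomasi corners tracked with pyramidal Lucas-Kanade flow;
    returns surviving feature positions and their motion vectors."""
    if prev_pts is None or len(prev_pts) < 50:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                           qualityLevel=0.01, minDistance=7)
    cur_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    flow = cur_pts[good] - prev_pts[good]   # per-feature motion vectors
    return cur_pts[good], flow              # e.g. input to an ego-motion EKF
```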
Additionally, in [112] an intuitive collision avoidance controller, combining spherical imaging, properties of conical spirals and visual predictive control, has been proposed, being able to control the navigation of the UAV around an object and along a conical spiral trajectory.

3.3 Aerial Target Tracking

In this section, object tracking approaches for UAVs are highlighted. In short, object tracking can be divided into object detection and object following strategies using image sequences. The visual sensor is used to estimate the relative position and translational velocity between the UAV and the object. Moreover, the visual information, along with data from other sensors, is used as an input to the designed controller of the UAV in order to track the target. The interest in this area is growing, as this technology can be used for airborne surveillance, search and rescue missions or even navigation tasks. In Fig. 8 three target following examples are depicted as follows: a) [113], b) [114], c) [115]. In this Figure, downward looking cameras onboard aerial vehicles are used for target detection and tracking. This approach is applicable in surveillance tasks, rescue missions and general monitoring operations. The target is highlighted distinctively in each frame. The rest of this section briefly provides an overview of the contributions in this field.

In [116] a low cost UAV for land-mine detection has been developed. The vision algorithm performed noise filtering using morphological operators and feature extraction with a template matching method. The classification process decided whether the detected target was an object of interest or not.

In [117] a fast GPU based circular marker detection process, used for UAVs picking up ground objects in "real time", has been suggested. The Randomized Hough Transform (RHT) was used to detect circles in an image frame with low computation time, where the RHT was executed on the GPU aiming for increased detection speed.

In [113] an emergency Inertial-Vision navigation system dealing with GPS-denied environments has been proposed. Whenever a UAV was losing its GPS signal during the flight, the designed navigation system performed real-time visual target tracking and relative navigation. In this manner, a fixed wing unmanned aerial vehicle was able to hover over a ground landmark with unknown position. The object tracking task was fulfilled by a kernel based mean-shift algorithm. Thereafter, the visual data were merged with the measured data from the inertial sensor through an EKF for the UAV state estimation. This approach took into account the delay that the image processing introduced to the visual measurements for the navigation controller.
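A kernel based mean-shift tracker of the kind used in [113] can be sketched with OpenCV's histogram back-projection and meanShift; this is a generic illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def init_tracker(frame, window):
    """Build a hue histogram of the initial target window (x, y, w, h)."""
    x, y, w, h = window
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track(frame, window, hist):
    """Relocate the window over the densest back-projection region."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.meanShift(backproj, window, crit)
    return window   # updated (x, y, w, h) of the tracked target
```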
Moreover, in [118] a quadrotor visual tracking sys- Regarding the people detection part, the thermal
tem has been suggested. The computer vision part image was processed with various cascaded Haar clas-
implemented a pattern recognition algorithm for the sifiers whilst simultaneously contours were extracted
estimation of the position and the orientation of the from the optical image. In addition, in [124] a mov-
target and was sending this information to the quadro- ing target tracking control method for a UAV has been
tor controller. In the same way [119] presented a mini proposed. It has been based on active vision concept
UAV that was capable to localize and robustly fol- where the image sensor altered its location and orien-
low a target utilizing visual information. The proposed tation in order to obtain visual information from the
method implemented a multi part tracker consisting of object via a servo control scheme. For this purpose
the visual system and the UAV control law. A color two controllers for the UAV flight task have been sug-
based algorithm detected the object which was then gested, either a H2 /H∞ robust controller or a PID/H∞
tracked through particle filtering. In the sequel, the controller. Apart from the flight controller another
controller used the estimation of the relative 2D posi- PID controller performed the tracking task, which was
tion and orientation between the UAV and the target based on disturbance observer so that it compensated
from the visual tracker. Regarding the control of the the introduced disturbance from the UAV movements.
UAV translation a hierarchical scheme with PI and Another research, [125], presented a novel movement
P controllers have been employed, while for the yaw detection algorithm for UAV surveillance, based on
angle a P controller has been designed. Similarly [120] dense optical flow. Additoinally, this research devel-
combined visual attention model and EKF for efficient oped a new method for rejecting outliers in matching
ground object detection and tracking. In this research, process where the movement was determined by local
three visual saliency maps, the local, the global and adaptive threshold strategy.
the rarity saliency map, were computed. These three In [126] an object tracking system for UAV
matrices created the intensity feature map which con- mounted with a catadioptric and moving Pan Tilt
tained the detected object. The visual measurements Zoom camera has been proposed where the adaptive
were used by the Kalman filter to estimate the objects background subtraction algorithm has been used to
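Since several of the above trackers feed visual detections into a Kalman filter, a minimal constant-velocity filter over pixel detections is sketched below. It is a generic illustration, not the filter of [120]; the frame rate and noise covariances are assumed values.

```python
import numpy as np
import cv2

# Constant-velocity model: state [x, y, vx, vy], measurement [x, y].
kf = cv2.KalmanFilter(4, 2)
dt = 1.0 / 30.0  # assumed frame rate
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(detection_xy):
    """Predict, then correct with the detector's pixel measurement."""
    prediction = kf.predict()
    if detection_xy is not None:            # the detector may miss some frames
        kf.correct(np.array(detection_xy, np.float32).reshape(2, 1))
    return prediction[:2].ravel()           # smoothed target position
```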
In [121] UAV visual ground target tracking which can be used in GPS-denied environments has been proposed. This system combined the camera and IMU on-board sensors for the image processing tasks and the navigation control of the UAV. In short, the visual target tracker detected the 2D position of an object in an image and afterwards an optical flow vector was calculated. Finally, an EKF has been designed to estimate the position and velocity of both the target and the UAV. The ONERA ReSSAC helicopter was used as testbed.

Towards aerial surveillance, [122] suggested a conceptual framework for the dynamic detection of moving targets (humans, vehicles) from a monocular, moving UAV. This method combined frame differencing with segmentation algorithms on aerial images. Correspondingly, [123] presented an approach utilizing optical and thermal cameras. Various separate cascaded Haar classifiers were applied to the optical image for the vehicle detection part. The detections that matched for every classifier were merged to form the correct estimation. When the vehicle was detected in the optical image, the thermal image was also used to detect the vehicle and verify the result geometrically. Regarding the people detection part, the thermal image was processed with various cascaded Haar classifiers, whilst simultaneously contours were extracted from the optical image. In addition, in [124] a moving target tracking control method for a UAV has been proposed. It is based on the active vision concept, where the image sensor alters its location and orientation in order to obtain visual information from the object via a servo control scheme. For this purpose, two controllers have been suggested for the UAV flight task, either an H2/H∞ robust controller or a PID/H∞ controller. Apart from the flight controller, another PID controller performed the tracking task, based on a disturbance observer so that it compensated the disturbance introduced by the UAV movements. Another research, [125], presented a novel movement detection algorithm for UAV surveillance, based on dense optical flow. Additionally, this research developed a new method for rejecting outliers in the matching process, where the movement was determined by a local adaptive threshold strategy.
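A dense-optical-flow movement test in the spirit of [125] can be sketched as follows, using OpenCV's Farnebäck flow with a single global magnitude threshold in place of the paper's local adaptive strategy and outlier rejection; the threshold and flow parameters are assumptions.

```python
import cv2
import numpy as np

def moving_pixels(prev_gray, gray, mag_thresh=2.0):
    """Flag pixels whose dense optical-flow magnitude exceeds a threshold."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    # NOTE: a UAV's ego-motion also induces flow; [125] additionally rejects
    # outliers and adapts the threshold locally, which is omitted here.
    return magnitude > mag_thresh
```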
In [126] an object tracking system for a UAV mounted with a catadioptric and a moving Pan-Tilt-Zoom camera has been proposed, where an adaptive background subtraction algorithm has been used to detect the moving object. In this case, a novel method utilized data from both cameras and estimated the position of the UAV relative to the target.
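The adaptive background subtraction step can be illustrated with OpenCV's MOG2 model. This sketch stands in for the subtractor of [126], whose catadioptric/PTZ geometry is not modeled here; the history and threshold values are assumptions.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

def detect_moving_object(frame):
    """Return the bounding box of the largest foreground blob, if any."""
    fg = subtractor.apply(frame)                  # adaptive background model
    fg = cv2.morphologyEx(                        # remove speckle noise
        fg, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```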
In [114] the development of a low-cost and lightweight vision processing platform on-board a low-altitude UAV for real-time object identification and tracking has been suggested. The aerial image was converted to the HSV color space; then, using various threshold values for the different colors, the image became binary. Afterwards, an edge detection algorithm was used and, finally, some geometrical operations and filters were applied to enhance the result. The object's position was calculated through convolution.
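The color-thresholding pipeline of [114] ran on an FPGA; a functionally similar sketch in OpenCV is given below, where the HSV bounds are assumed values for an orange target and would be tuned per object color.

```python
import cv2
import numpy as np

def locate_colored_target(frame_bgr):
    """Binarize by HSV thresholds, clean up, and return the target centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed bounds for an orange marker; tune per target color.
    binary = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    edges = cv2.Canny(binary, 50, 150)   # edge map, analogous to the step in [114]
    m = cv2.moments(binary)
    if m["m00"] == 0:
        return None, edges
    return (m["m10"] / m["m00"], m["m01"] / m["m00"]), edges
```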
street identification was processed by a Bayes clas- to the control scheme, while the motion is regulated
sifier to differentiate between street and background by varying the ground speed of each vehicle. In [135]
pixels. The street classifier updated its parameters a landing system for a aerial platform based on vision
from a recursive Bayesian process. When the street has been suggested. The landing spot visualizes a
was identified an edge detection algorithm computed target with specific shape. The onboard visual sen-
every object inside it and estimated a color profile. sor performs edge detection using line segmentation,
This profile was incorporated to the classifier in order feature point mapping and clustering. Afterwards, fil-
to improve the parameters for street detection. tering is applied to recognize the landing spot target.
In [130] an object tracking and following method The relative pose of the vehicle with the detected
for a UAV has been presented. The two basic com- target is estimated using Kalman Filtering. Finally,
ponents of this approach are an object tracker for the acquired data are used for the position-attitude
the vision part and an Image Based Visual Servoing controller of the aerial platform to perform landing.
controller for the target following part. In [136] a visual algorithm for long term object fol-
In [131] a visual neuro-fuzzy motion control lowing has been proposed. This work is divided in
scheme for a non-linear tracking task and gimbal three parts, the Global Matching and Local Track-
movement has been designed. The camera’s pan and ing, the Local Geometric Filter (LGF), and the Local
tilt motions were controlled by a neuro-fuzzy sys- outlier factor (LOF). GMLT uses FAST feature detection
tem based on Radial Basis Function Network. The for global matching and LK optical flow for local feature
controller estimated the velocity and position com- tracking. LGF and LOF are implemented to remove
mands that were needed in order to actuate the gimbal outliers from global and local feature correspondences
(pan and tilt motion), using measurements from object and provide a reliable detection of the object.
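The CAMShift-based ellipse-tracking stage can be sketched with OpenCV's CamShift, which returns a rotated rectangle usable as an ellipse estimate. This is an illustration of the technique, not the system of [133], and it assumes the target histogram and initial window were built as in the mean-shift example earlier in this section.

```python
import cv2

def camshift_step(hsv_frame, hist, track_window):
    """One CamShift update: returns an ellipse-like rotated box and the window."""
    backproj = cv2.calcBackProject([hsv_frame], [0], hist, [0, 180], 1)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rot_box, track_window = cv2.CamShift(backproj, track_window, term)
    (cx, cy), (w, h), angle = rot_box   # center, axes and orientation of the fit
    return (cx, cy, w, h, angle), track_window
```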
In [134] coordinated vision based target tracking from a fleet of fixed-wing UAVs has been examined. The main contribution of this work consists of the formulation of control algorithms that coordinate the motion of multiple agents for surveillance tasks. In this case, the heading angular rate is used as an input to the control scheme, while the motion is regulated by varying the ground speed of each vehicle. In [135] a vision based landing system for an aerial platform has been suggested. The landing spot visualizes a target with a specific shape. The onboard visual sensor performs edge detection using line segmentation, feature point mapping and clustering. Afterwards, filtering is applied to recognize the landing spot target. The relative pose of the vehicle with respect to the detected target is estimated using Kalman filtering. Finally, the acquired data are used by the position-attitude controller of the aerial platform to perform landing. In [136] a visual algorithm for long-term object following has been proposed. This work is divided into three parts: Global Matching and Local Tracking (GMLT), the Local Geometric Filter (LGF) and the Local Outlier Factor (LOF). GMLT uses FAST feature detection for global matching and LK optical flow for local feature tracking. The LGF and LOF are implemented to remove outliers from the global and local feature correspondences and provide a reliable detection of the object.
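The GMLT combination of FAST detection and LK optical flow can be sketched with OpenCV primitives as follows. The LGF/LOF outlier filters of [136] are replaced here by a simple forward-backward consistency check, and the FAST threshold is an assumed value.

```python
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=25)

def track_features(prev_gray, gray):
    """FAST keypoints tracked frame-to-frame with pyramidal Lucas-Kanade."""
    kps = fast.detect(prev_gray, None)
    if not kps:
        return np.empty((0, 2)), np.empty((0, 2))
    pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    back, status2, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, nxt, None)
    # Forward-backward consistency as a simple stand-in for LGF/LOF filtering.
    fb_err = np.linalg.norm((pts - back).reshape(-1, 2), axis=1)
    good = (status.ravel() == 1) & (status2.ravel() == 1) & (fb_err < 1.0)
    return pts.reshape(-1, 2)[good], nxt.reshape(-1, 2)[good]
```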
4 Guidance

This section presents a collection of studies towards autonomous exploration for UAVs, combining methods mentioned in the previous sections. Elaborate control laws are employed to adjust the position and attitude of the vehicle, combining information from computer vision, image processing, path planning and other research fields. This topic is broad and contains many strategies that approach the problem from various aspects. Coordinating sensors with controllers on UAVs can serve as a basis for other sophisticated applications and determines their performance. The rest of this section provides a brief overview of the contributions in this field.
In [86] the authors introduced a coupled state estimator for a quadrotor using solely cameras and an IMU. The architecture of the proposed system used methods from stereo and monocular vision for pose estimation and scale recovery, whereas this information is afterwards fused in an Unscented Kalman filter with IMU measurements. The processed estimated states are then distributed for trajectory planning, UAV control and mapping.

In [92] a sophisticated testbed to examine vision based navigation in indoor and outdoor cluttered environments has been developed. The vehicle is equipped with a stereo camera, an IMU, two processors and an FPGA board. Moreover, the cameras use stereo odometry for ego-motion estimation, which is fused in an EKF with IMU measurements for mapping and localization purposes. An obstacle-free path planning routine has also been developed, so that the UAV is able to move between waypoints in the map. Similarly, in [137] an unmanned aircraft system towards autonomous navigation based on laser and stereo vision odometry has been developed. The vehicle was designed to operate in search and rescue missions in unknown indoor or outdoor environments. The system components consisted of three sections: the perception, the action and the cognition layer. During the perception part, the visual and laser measurements were merged with the IMU data for the UAV state estimation. This layer also performed the object detection task. The action layer consisted of the flight controller, which utilized the estimated pose of the vehicle. Lastly, during the cognition phase, path planning for the autonomous navigation was employed. Additionally, in [138] a SIFT feature descriptor passed data to the homography algorithm for motion estimation. Then, the measurements were fused with inertial information by an EKF. A delay-based measurement update method has also been described, which passes the homography data to the Kalman filter without any state augmentation. Another similar approach, [139], also proposed a vision-aided inertial navigation system for small UAVs based on homography. The data from the IMU, the camera, the magnetometer and the altimeter were fused through an EKF using a novel approach and then utilized by the UAV control for hovering and navigation.
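The homography-based motion-estimation front end of [138, 139] can be sketched as follows. ORB features are used here instead of SIFT purely to keep the example self-contained, and the match count and RANSAC threshold are assumptions; the resulting homography would be handed to the EKF described above.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def interframe_homography(prev_gray, gray):
    """Estimate the homography between consecutive frames with RANSAC."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H   # fused with IMU data downstream, as in [138, 139]
```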
In [140] a complete solution towards UAV autonomous navigation with flight endurance has been presented. Moreover, this vehicle was able to take off and land either on the ground or on a designed charging platform. These tasks were performed by computer vision landing and navigation algorithms and a UAV control scheme, using a camera and an ultrasonic sensor. The landing algorithm implemented ellipse tracking, while the navigation algorithm utilized optical flow. In [141] a road following system for a monocular UAV has been proposed. The vehicle was equipped with a camera, an IMU and an ultrasonic scanner. Moreover, it was able to measure its position and orientation in relation to the road it had to follow, without any prior information. This method implemented algorithms to deal with situations where the target road was occluded, switching to the inertial sensors for position data. A switching controller has also been developed to stabilize the lateral position of the vehicle for both the detected and occluded road cases. In [142] a robust vision terrain referenced navigation method for UAV position estimation has been proposed, combining visual odometry by homography with a point-mass filter based navigation algorithm. The data used in the process were obtained from a monocular camera, a radio altimeter and a terrain referenced elevation map. In the same track, in [143] a technique for UAV pose estimation through template based registration has been suggested, using a set of georeferenced images. The UAV captured image was processed using a similarity function with a reference template. This approach utilized Mutual Information as the similarity function.

In [144] the combination of a stereo system with an IMU for UAV power line inspection tasks has been suggested. The aircraft navigated in close proximity to the target during the inspection. This proposal performed UAV pose estimation and environment mapping by merging visual odometry with inertial navigation through an EKF. In [145] a vision system for UAV autonomous navigation, using as reference the distance between the vehicle and a wall, has been developed, utilizing a laser and camera perception system. The sensors extracted 3D data and provided them to the control law for the autonomous navigation. This approach offered the novelty of alternative sensor usage and combination in order to overcome the payload limitations of mini-scale UAVs. In [146] an on-board vision FPGA-based module has been designed with potential application to real-time UAV hovering. The sensor implemented various image processing algorithms, like the Harris detector, template matching image correction and an EKF, to extract all the information required for the stabilization control. It has been specifically destined for mini unmanned aircraft with limited resources, size and payload. Similarly, in [147] a system for UAV stabilization over a planar ground target has been presented. This approach tackled the problem of time delay when data from different sensors are fused in a Kalman filter. In [148] a receding horizon planning algorithm for UAV navigation in cluttered environments has been suggested. In this approach, the data from the camera and the IMU were processed by an Unscented Kalman filter, while the estimated states from the filter were integrated into the receding horizon control and the flight controller. This research combines horizon planning with SLAM for navigation and obstacle avoidance.

In [149] a path planning algorithm for autonomous exploration in bounded unknown environments has been presented. The core of this work is based on a receding horizon scheme. The views are sampled as nodes of a random tree and, according to the amount of unmapped space, the next best viewpoint is selected. Additionally, visual sensors are employed to provide information on the explored area. This algorithm has been experimentally evaluated on a hexacopter.
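The receding-horizon sampling idea behind [149] can be sketched in a few lines: grow a random tree of candidate viewpoints, score each branch by the unmapped volume its views would cover, and execute only the first edge of the best branch before replanning. This toy sketch assumes a point robot in 3D, a spherical sensor model and a precomputed set of unknown-voxel centers; it ignores collision checking and does not reproduce the actual gain formulation of [149].

```python
import numpy as np

rng = np.random.default_rng(0)

def information_gain(view, unknown_voxels, sensor_range=3.0):
    """Count unmapped voxels within sensor range of a candidate viewpoint."""
    d = np.linalg.norm(unknown_voxels - view, axis=1)
    return np.count_nonzero(d < sensor_range)

def next_best_view(pose, unknown_voxels, n_nodes=50, step=1.0, decay=0.5):
    """Grow a random viewpoint tree; return the first edge of the best branch."""
    nodes, parents, depth, gains = [pose], [0], [0], [0.0]
    for _ in range(n_nodes):
        i = int(rng.integers(len(nodes)))            # random expansion node
        d = rng.normal(size=3)
        cand = nodes[i] + step * d / np.linalg.norm(d)
        nodes.append(cand)
        parents.append(i)
        depth.append(depth[i] + 1)
        # gain accumulated along the branch, discounted with tree depth
        gains.append(gains[i] +
                     decay ** depth[-1] * information_gain(cand, unknown_voxels))
    best = int(np.argmax(gains))
    while parents[best] != 0:                        # walk back to the root edge
        best = parents[best]
    return nodes[best]                               # receding horizon: fly one edge
```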
In [150] a complete aerial platform setup has been developed for river mapping. The proposed work employs a stereo camera and a laser scanner for the mapping, obstacle detection and state estimation. Two exploration algorithms have been tested with experimental evaluations: first, a follow-the-river-in-stable-flight modification of the Sparse Tangential Network and, secondly, maximization of the river length covered during the mission. In [151] a coverage algorithm for ground areas from fixed-wing UAVs has been proposed. The novelty of this work stands in the consideration of practical problems in the coverage mission. More specifically, the size of the deployed UAV team is a function of the size and shape of the area, as well as of the flight time of the platform. The developed algorithm consists of two parts: modelling the area coordinates in a graph in such a way that a single agent covers the area in minimum time and, secondly, an optimization step performed to define the routes of the team of aerial platforms for the coverage. In [152] an aerial platform with localization, mapping and path planning capabilities in 3D has been developed. This approach is based on vision and IMU sensors. Visual inertial odometry is performed for local consistency of the platform movement, according to a task defined at a high level by the operator. Sparse pose graph optimization and re-localization of landmarks are implemented to correct the drift in the odometry estimates. The optimized poses are combined with stereo vision data to build a global occupancy map that is also used by the global planner to calculate 3D dynamic paths based on the detected obstacles. The experimental trials were performed in unknown environments with solely onboard processing.
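The global occupancy map of [152] can be illustrated with a toy two-dimensional log-odds grid updated from range returns. The grid size, resolution and log-odds increment are assumed values, and the free-space ray-tracing of a full implementation is omitted here.

```python
import numpy as np

class OccupancyGrid2D:
    """Toy occupancy grid: log-odds update of cells hit by range returns."""

    def __init__(self, size=200, resolution=0.1, l_occ=0.85):
        self.grid = np.zeros((size, size))   # log-odds, 0 = unknown
        self.res = resolution
        self.l_occ = l_occ

    def to_cell(self, xy):
        c = (np.asarray(xy) / self.res + self.grid.shape[0] / 2).astype(int)
        return tuple(np.clip(c, 0, self.grid.shape[0] - 1))

    def update(self, pose_xy, hits_xy):
        """Mark endpoints of stereo/laser returns (world frame) as occupied."""
        for hit in hits_xy:
            self.grid[self.to_cell(hit)] += self.l_occ
        # A full implementation would also ray-trace free space from pose_xy
        # to each hit and subtract log-odds along the ray (omitted here).
```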
5 Discussion

5.1 Challenges

This article provided an overview of the advances in vision based navigation, perception and control for unmanned aerial systems, where the major contributions in each category were listed. It is obvious that integrating visual sensors into the UAV ecosystem is a research field that attracts huge resources, but still lacks solid experimental evaluation. For various reasons, aerial vehicles can be considered a challenging testbed for computer vision applications compared to conventional robots. The dimension of the aircraft's state is usually larger than that of a mobile robot, while the image processing algorithms have to provide visual information robustly in real time and should be able to compensate for difficulties like rough changes in the image sequence and 3D information changes in visual servoing applications. Despite the fact that the computer vision community has developed elaborate SLAM algorithms for visual applications, the majority of them cannot be utilized on UAVs directly, due to limitations posed by their architecture and their processing power. More specifically, aircraft have a maximum limit in generating thrust in order to remain airborne, which restricts the available payload for sensing and computing power. The fast dynamics of aerial platforms demand minimum delays and noise compensation in state computations in order to avoid instabilities. Furthermore, it should be noted that, unlike the case of ground vehicles, UAVs cannot simply stop operating when there is great uncertainty in the state estimation, a fact that could generate incoherent control commands to the aerial vehicle and make it unstable. In case the computational power is not enough to update the velocity and attitude in time, or there is a hardware-mechanical failure, the UAV could exhibit unpredictable behaviour, increase or decrease speed, oscillate and eventually crash. Computer vision algorithms should be able to respond very quickly to scene changes (dynamic scenery), a consequence of the UAV's native ability to operate at various altitudes and orientations, which results in the sudden appearance and disappearance of obstacles and targets. An important assumption that the majority of the presented contributions make is that the vehicles fly at low speeds in order to compensate for the fast scene alterations. In other words, dynamic scenery poses a significant problem to overcome. Another challenge in SLAM frameworks that should be taken into account is the fact that, compared to ground vehicles, aerial platforms cover large areas, meaning that they build huge maps that contain more information. Object tracking methods should be robust against occlusions, image noise, vehicle disturbances and illumination variations while pursuing the target. When the target remains inside the field of view but is either occluded by another object or not clearly visible to the sensor, it is crucial for the tracker to keep operating, to estimate the target's trajectory, to recover the process and to function in harmony with the UAV controllers. Therefore the need exists for further, highly sophisticated and robust control schemes, to optimally close the loop using visual information.

Nowadays, the integration of computer vision applications on UAVs has passed its infancy and, without any doubt, huge steps have been made towards understanding and approaching autonomous aircraft. The subject of UAV control is a well studied field, since various position, attitude and rate controllers have already been proposed, while currently there is a significantly large focus of the research community on this topic. Thus, it is important to establish a reliable link between vision algorithms and control theory to reach greater levels of autonomy. The research work presented in this review indicates that some techniques are experimentally proven, but many of the visual servoing, SLAM and object tracking strategies for autonomous UAVs are not yet fully integrated into their navigation controllers, since the presented approaches either work under some assumptions, in simple experimental tests and with system simplifications, or remain at the simulation stage. In addition, their performance is constantly evaluated and improved, so more and more approaches are introduced. Therefore, seminal engineering work is essential to take the current state of the art a step further and evaluate its performance in actual flight tests. Another finding of this survey is the fact that most experimental trials reported in the presented literature were performed on unmanned vehicles with an increased payload for sensory systems and onboard processing units. Nonetheless, it is clear that current research is focused on miniature aerial vehicles that can operate indoors and outdoors and target infrastructure inspection and maintenance using their agile maneuvering capabilities. Finally, it should be highlighted that it was not feasible to perform an adequate comparison of the presented algorithms, due to the lack of proper benchmarking tools and metrics for navigation and guidance topics [18]. Many approaches are application driven and their characteristics and needs differ. Therefore a common basis should be established within the research community.

5.2 Camera Sensors

This review article is focused on research work towards vision based autonomous aerial vehicles. Therefore, an important factor that should be considered is the visual system used in the individual papers. Throughout the review process, three visual sensor types have mainly been distinguished: the BlueFox monocular camera from MatrixVision, the VI-Sensor stereo camera from Skybotix and the Asus Xtion Pro RGB-D sensor. The aforementioned sensors cover a great range of applications, depending on the individual requirements. Regarding the utilized hardware, this survey will not provide more information, since in most of the referenced articles the results are discussed in relation to the hardware utilized.
5.3 Future Trends

UAVs possess some powerful characteristics which, in the near future, could potentially turn them into the pioneering elements in many applications. Characteristics like their versatile movement, combined with special features like the lightweight chassis and the onboard sensors, could open a world of possibilities, and these are the reasons why UAVs have gained so much attention in research. Nowadays, the scientific community is focused on finding more efficient schemes for using visual servoing techniques, developing SLAM algorithms for online, accurate localization and detailed dense 3D reconstruction, proposing novel path planning methods for obstacle-free navigation and integrating aerial trackers for real-scenario indoor and outdoor applications. Moreover, many resources are nowadays devoted to visual-inertial state estimation, to combine the advantages of both research areas. The evolution of processing power on board aerial agents will open new horizons in the field and establish reliable visual-inertial state estimation as the standard procedure and the basic element of every agent. Additionally, elaborate schemes for online mapping will be studied and refined for dynamic environments. Moreover, there is ongoing research on equipping UAVs with robotic arms/tools in order to extend their capabilities in aerial manipulation for various tasks like maintenance. The upcoming trends will examine floating base manipulators towards task completion in either a single or a collaborative manner. Operating an aerial vehicle with a manipulator is not a straightforward process and many challenges exist, like the compensation for the varying Center of Gravity and the external disturbances from the interaction, capabilities that pose demanding vision based tasks and that are expected to revolutionize the current utilization of UAVs. Finally, there is also great interest in the cooperative operation of multiple aerial platforms, mostly for distributed solutions where the agents act individually, exchanging information among themselves to fulfill specific constraints. Aerial robotic swarms are the future for many applications, such as inspection, search and rescue missions, as well as farming, transportation and mining processes.

List of Acronyms

UAV Unmanned Aerial Vehicle
EKF Extended Kalman Filter
GPS Global Positioning System
INS Inertial Navigation System
IMU Inertial Measurement Unit
IBVS Image Based Visual Servoing
PBVS Position Based Visual Servoing
VTOL Vertical Takeoff and Landing Vehicle
SLAM Simultaneous Localization and Mapping
PTAM Parallel Tracking and Mapping
MAV Micro Aerial Vehicle

Acknowledgments This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Grant Agreement No. 644128, AEROWORKS.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. U.S. Department of Transportation: Federal Aviation Administration. https://www.faa.gov/uas/faqs/
2. U.K. Ministry of Defence: Unmanned Aircraft Systems: Terminology, Definitions and Classification
3. U.S. Department of Defense: Standard practice for system safety. MIL-STD-882D (2000)
4. Huang, H.-M.: Autonomy levels for unmanned systems (ALFUS) framework, volume I: Terminology, Version 2.0 (2008)
5. Valavanis, K.P.: Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy, vol. 33. Springer Science & Business Media (2008)
6. YAMAHA: RMAX. http://rmax.yamaha-motor.com.au/features/
7. Ascending Technologies: AscTec NEO. http://www.asctec.de/en/uav-uas-drones-rpas-roav/asctec-firefly/
8. ShadowAir: Super Bat ShadowAir. http://www.shadowair.com
9. Association Unmanned Aerial Vehicle Systems: Civil and Commercial UAS Applications. https://www.uavs.org/commercial
10. Mejias, L., Correa, J.F., Mondragón, I., Campoy, P.: Colibri: A vision-guided uav for surveillance and visual inspection (2007)
11. Araar, O., Aouf, N.: A new hybrid approach for the visual servoing of vtol uavs from unknown geometries. In: IEEE 22nd Mediterranean Conference of Control and Automation (MED), pp. 1425–1432. IEEE (2014)
12. Carrillo, L.R.G., López, A.E.D., Lozano, R., Pégard, C.: Combining stereo vision and inertial navigation system for a quad-rotor uav. J. Intell. Robot. Syst. 65(1–4), 373–387 (2012)
13. Max Botix: XL-MaxSonar-EZ4 Ultrasonic Sensor. http://www.maxbotix.com
14. SkyBotix AG: VI sensor. http://www.skybotix.com/
15. TeraRanger: TeraRanger Rotating Lidar. http://www.teraranger.com/products/teraranger-lidar/
16. Matrix Vision: mvBlueFOX3 Camera. https://www.matrix-vision.com/USB3-vision-camera-mvbluefox3.html
17. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer Science & Business Media (2010)
18. Kendoul, F.: Survey of advances in guidance, navigation, and control of unmanned rotorcraft systems. J. Field Robot. 29(2), 315–378 (2012)
19. Hutchinson, S., Hager, G.D., Corke, P.I.: A tutorial on visual servo control. IEEE Trans. Robot. Autom. 12(5), 651–670 (1996)
20. Corke, P.: Robotics, Vision and Control: Fundamental Algorithms in MATLAB, vol. 73. Springer Science & Business Media (2011)
21. Asl, H.J., Oriolo, G., Bolandi, H.: An adaptive scheme for image-based visual servoing of an underactuated uav. IEEE Trans. Robot. Autom. 29(1) (2014)
22. Ozawa, R., Chaumette, F.: Dynamic visual servoing with image moments for a quadrotor using a virtual spring approach. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 5670–5676. IEEE (2011)
23. Araar, O., Aouf, N.: Visual servoing of a quadrotor uav for autonomous power lines inspection. In: 22nd Mediterranean Conference of Control and Automation (MED), pp. 1418–1424. IEEE (2014)
24. Azinheira, J.R., Rives, P.: Image-based visual servoing for vanishing features and ground lines tracking: Application to a uav automatic landing. Int. J. Optomechatron. 2(3), 275–295 (2008)
25. Sa, I., Hrabar, S., Corke, P.: Inspection of pole-like structures using a vision-controlled vtol uav and shared autonomy. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4819–4826. IEEE (2014)
26. Mills, S.J., Ford, J.J., Mejías, L.: Vision based control for fixed wing uavs inspecting locally linear infrastructure using skid-to-turn maneuvers. J. Intell. Robot. Syst. 61(1–4), 29–42 (2011)
27. Peliti, P., Rosa, L., Oriolo, G., Vendittelli, M.: Vision-based loitering over a target for a fixed-wing uav. In: Proceedings of the 10th International IFAC Symposium on Robot Control (2012)
28. Guenard, N., Hamel, T., Mahony, R.: A practical visual servo control for an unmanned aerial vehicle. IEEE Trans. Robot. 24(2), 331–340 (2008)
29. Metni, N., Hamel, T.: A uav for bridge inspection: Visual servoing control law with orientation limits. Autom. Construct. 17(1), 3–10 (2007)
30. Hamel, T., Mahony, R.: Image based visual servo control for a class of aerial robotic systems. Automatica 43(11), 1975–1983 (2007)
31. Chriette, A.: An analysis of the zero-dynamics for visual servo control of a ducted fan uav. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2515–2520. IEEE (2006)
32. Le Bras, F., Mahony, R., Hamel, T., Binetti, P.: Adaptive filtering and image based visual servo control of a ducted fan flying robot. In: 45th IEEE Conference on Decision and Control, pp. 1751–1757. IEEE (2006)
33. Kim, S., Choi, S., Lee, H., Kim, H.J.: Vision-based collaborative lifting using quadrotor uavs. In: 14th International Conference on Control, Automation and Systems (ICCAS), pp. 1169–1174. IEEE (2014)
34. Barajas, M., Dávalos-Viveros, J.P., Garcia-Lumbreras, S., Gordillo, J.L.: Visual servoing of uav using cuboid model with simultaneous tracking of multiple planar faces. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 596–601. IEEE (2013)
35. Fahimi, F., Thakur, K.: An alternative closed-loop vision-based control approach for unmanned aircraft systems with application to a quadrotor. In: International Conference on Unmanned Aircraft Systems (ICUAS), pp. 353–358. IEEE (2013)
36. Lee, D., Ryan, T., Kim, H.J.: Autonomous landing of a vtol uav on a moving platform using image-based visual servoing. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 971–976. IEEE (2012)
37. Huh, S., Shim, D.H.: A vision-based automatic landing method for fixed-wing uavs. J. Intell. Robot. Syst. 57(1–4), 217–231 (2010)
38. Salazar, S., Romero, H., Gomez, J., Lozano, R.: Real-time stereo visual servoing control of an uav having eight-rotors. In: 6th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), pp. 1–11. IEEE (2009)
39. Dib, A., Zaidi, N., Siguerdidjane, H.: Robust control and visual servoing of an uav. In: 17th IFAC World Congress 2008, pp. CD-ROM (2008)
40. Kendoul, F., Fantoni, I., Nonami, K.: Optic flow-based vision system for autonomous 3d localization and control of small aerial vehicles. Robot. Autonom. Syst. 57(6), 591–602 (2009)
41. Eberli, D., Scaramuzza, D., Weiss, S., Siegwart, R.: Vision based position control for mavs using one single circular landmark. J. Intell. Robot. Syst. 61(1–4), 495–512 (2011)
42. Kendoul, F., Fantoni, I., Lozano, R.: Adaptive vision-based controller for small rotorcraft uavs control and guidance. In: Proceedings of the 17th IFAC World Congress, pp. 6–11 (2008)
43. Lange, S., Sunderhauf, N., Protzel, P.: A vision based onboard approach for landing and position control of an autonomous multirotor uav in gps-denied environments. In: International Conference on Advanced Robotics (ICAR 2009), pp. 1–6. IEEE (2009)
44. Alkowatly, M.T., Becerra, V.M., Holderbaum, W.: Bioinspired autonomous visual vertical control of a quadrotor unmanned aerial vehicle. J. Guid. Control Dyn., 1–14 (2014)
45. Ghadiok, V., Goldin, J., Ren, W.: On the design and development of attitude stabilization, vision-based navigation, and aerial gripping for a low-cost quadrotor. Autonom. Robots 33(1–2), 41–68 (2012)
46. Fucen, Z., Haiqing, S., Hong, W.: The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle. In: International Conference on Information and Automation (ICIA), pp. 117–122. IEEE (2009)
47. Zhao, Y., Pei, H.: An improved vision-based algorithm for unmanned aerial vehicles autonomous landing. Phys. Proced. 33, 935–941 (2012)
48. Artieda, J., Sebastian, J.M., Campoy, P., Correa, J.F., Mondragón, I.F., Martínez, C., Olivares, M.: Visual 3-d slam from uavs. J. Intell. Robot. Syst. 55(4–5), 299–321 (2009)
49. Faessler, M., Fontana, F., Forster, C., Mueggler, E., Pizzoli, M., Scaramuzza, D.: Autonomous, vision-based flight and live dense 3d mapping with a quadrotor micro aerial vehicle. J. Field Robot. (2015)
50. Fraundorfer, F., Heng, L., Honegger, D., Lee, G.H., Meier, L., Tanskanen, P., Pollefeys, M.: Vision-based autonomous mapping and exploration using a quadrotor mav. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4557–4564. IEEE (2012)
51. Schmid, K., Lutz, P., Tomić, T., Mair, E., Hirschmüller, H.: Autonomous vision-based micro air vehicle for indoor and outdoor navigation. J. Field Robot. 31(4), 537–570 (2014)
52. Harmat, A., Trentini, M., Sharf, I.: Multi-camera tracking and mapping for unmanned aerial vehicles in unstructured environments. J. Intell. Robot. Syst., 1–27 (2014)
53. Leishman, R.C., McLain, T.W., Beard, R.W.: Relative navigation approach for vision-based aerial gps-denied navigation. J. Intell. Robot. Syst. 74(1–2), 97–111 (2014)
54. Forster, C., Pizzoli, M., Scaramuzza, D.: Svo: Fast semi-direct monocular visual odometry. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 15–22. IEEE (2014)
55. Forster, C., Faessler, M., Fontana, F., Werlberger, M., Scaramuzza, D.: Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 111–118. IEEE (2015)
56. Lynen, S., Achtelik, M.W., Weiss, S., Chli, M., Siegwart, R.: A robust and modular multi-sensor fusion approach applied to mav navigation. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923–3929. IEEE (2013)
57. Pizzoli, M., Forster, C., Scaramuzza, D.: Remode: Probabilistic, monocular dense reconstruction in real time. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2609–2616. IEEE (2014)
58. Fu, C., Olivares-Mendez, M.A., Suarez-Fernandez, R., Campoy, P.: Monocular visual-inertial slam-based collision avoidance strategy for fail-safe uav using fuzzy logic controllers. J. Intell. Robot. Syst. 73(1–4), 513–533 (2014)
59. Wang, T., Wang, C., Liang, J., Zhang, Y.: Rao-blackwellized visual slam for small uavs with vehicle model partition. Indus. Robot: Int. J. 41(3), 266–274 (2014)
60. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
61. Magree, D., Johnson, E.N.: Combined laser and vision-aided inertial navigation for an indoor unmanned aerial vehicle. In: American Control Conference (ACC), pp. 1900–1905. IEEE (2014)
62. Cover, T., Hart, P.: Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 13(1), 21–27 (1967)
63. Mahalanobis, P.C.: On the generalised distance in statistics. Proc. Natl. Inst. Sci. India 2(1), 49–55 (1936)
64. Huh, S., Shim, D.H., Kim, J.: Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of uavs. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3158–3163. IEEE (2013)
65. Wang, C.L., Wang, T.M., Liang, J.H., Zhang, Y.C., Zhou, Y.: Bearing-only visual slam for small unmanned aerial vehicles in gps-denied environments. Int. J. Autom. Comput. 10(5), 387–396 (2013)
66. Nemra, A., Aouf, N.: Robust cooperative uav visual slam. In: IEEE 9th International Conference on Cybernetic Intelligent Systems (CIS), pp. 1–6. IEEE (2010)
67. Min, J., Jeong, Y., Kweon, I.S.: Robust visual lock-on and simultaneous localization for an unmanned aerial vehicle. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 93–100. IEEE (2010)
68. Jama, M., Schinstock, D.: Parallel tracking and mapping for controlling vtol airframe. J. Control Sci. Eng. 2011, 26 (2011)
69. Törnqvist, D., Schön, T.B., Karlsson, R., Gustafsson, F.: Particle filter slam with high dimensional vehicle model. J. Intell. Robot. Syst. 55(4–5), 249–266 (2009)
70. Harris, C., Stephens, M.: A combined corner and edge detector. In: Alvey Vision Conference, vol. 15, p. 50. Manchester (1988)
71. Montemerlo, M., Thrun, S., Koller, D., Wegbreit, B., et al.: Fastslam: A factored solution to the simultaneous localization and mapping problem. In: AAAI/IAAI, pp. 593–598 (2002)
72. Bryson, M., Sukkarieh, S.: Building a robust implementation of bearing-only inertial slam for a uav. J. Field Robot. 24(1–2), 113–143 (2007)
73. Kim, J., Sukkarieh, S.: Real-time implementation of airborne inertial-slam. Robot. Autonom. Syst. 55(1), 62–71 (2007)
74. Mossel, A., Leichtfried, M., Kaltenriner, C., Kaufmann, H.: Smartcopter: Enabling autonomous flight in indoor environments with a smartphone as on-board processing unit. Int. J. Pervas. Comput. Commun. 10(1), 92–114 (2014)
75. Yang, J., Dani, A., Chung, S.J., Hutchinson, S.: Inertial-aided vision-based localization and mapping in a riverine environment with reflection measurements. In: AIAA Guidance, Navigation, and Control Conference. Boston (2013)
76. Zhang, R., Liu, H.H.: Vision-based relative altitude estimation of small unmanned aerial vehicles in target localization. In: American Control Conference (ACC), pp. 4622–4627. IEEE (2011)
77. Nourani-Vatani, N., Pradalier, C.: Scene change detection for vision-based topological mapping and localization. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3792–3797. IEEE (2010)
78. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 6, 679–698 (1986)
79. Caballero, F., Merino, L., Ferruz, J., Ollero, A.: Unmanned aerial vehicle localization based on monocular vision and online mosaicking. J. Intell. Robot. Syst. 55(4–5), 323–343 (2009)
80. Lee, S.J., Kim, J.H.: Development of a quadrocoptor robot with vision and ultrasonic sensors for distance sensing and mapping. In: Robot Intelligence Technology and Applications 2012, pp. 477–484. Springer (2013)
81. Engel, J., Sturm, J., Cremers, D.: Scale-aware navigation of a low-cost quadrocopter with a monocular camera. Robot. Autonom. Syst. 62(11), 1646–1656 (2014)
82. Chowdhary, G., Johnson, E.N., Magree, D., Wu, A., Shein, A.: Gps-denied indoor and outdoor monocular vision aided navigation and control of unmanned aircraft. J. Field Robot. 30(3), 415–438 (2013)
83. Zhang, X., Xian, B., Zhao, B., Zhang, Y.: Autonomous flight control of a nano quadrotor helicopter in a gps-denied environment using on-board vision. IEEE Trans. Ind. Electron. 62(10), 6392–6403 (2015)
84. Harmat, A., Trentini, M., Sharf, I.: Multi-camera tracking and mapping for unmanned aerial vehicles in unstructured environments. J. Intell. Robot. Syst. 78(2), 291–317 (2015)
85. Bloesch, M., Omari, S., Hutter, M., Siegwart, R.: Robust visual inertial odometry using a direct ekf-based approach. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 298–304. IEEE (2015)
86. Shen, S., Mulgaonkar, Y., Michael, N., Kumar, V.: Vision-based state estimation for autonomous rotorcraft mavs in complex environments. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 1758–1764. IEEE (2013)
87. Troiani, C., Martinelli, A., Laugier, C., Scaramuzza, D.: Low computational-complexity algorithms for vision-aided inertial navigation of micro aerial vehicles. Robot. Autonom. Syst. 69, 80–97 (2015)
88. Loianno, G., Watterson, M., Kumar, V.: Visual inertial odometry for quadrotors on se(3). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1544–1551. IEEE (2016)
89. Loianno, G., Thomas, J., Kumar, V.: Cooperative localization and mapping of mavs using rgb-d sensors. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 4021–4028. IEEE (2015)
90. Piasco, N., Marzat, J., Sanfourche, M.: Collaborative localization and formation flying using distributed stereo-vision. In: IEEE International Conference on Robotics and Automation. Stockholm (2016)
91. Nieuwenhuisen, M., Droeschel, D., Beul, M., Behnke, S.: Obstacle detection and navigation planning for autonomous micro aerial vehicles. In: International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1040–1047. IEEE (2014)
92. Schmid, K., Tomic, T., Ruess, F., Hirschmuller, H., Suppa, M.: Stereo vision based indoor/outdoor navigation for flying robots. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3955–3962. IEEE (2013)
93. Heng, L., Meier, L., Tanskanen, P., Fraundorfer, F., Pollefeys, M.: Autonomous obstacle avoidance and maneuvering on a vision-guided mav using on-board processing. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2472–2477. IEEE (2011)
94. Magree, D., Mooney, J.G., Johnson, E.N.: Monocular visual mapping for obstacle avoidance on uavs. J. Intell. Robot. Syst. 74(1–2), 17–26 (2014)
95. Xiaoyi, D., Qinhua, Z.: Research on laser-assisted odometry of indoor uav with monocular vision. In: 3rd Annual International Conference on Cyber Technology in Automation, Control and Intelligent Systems (CYBER), pp. 165–169. IEEE (2013)
96. Gosiewski, Z., Ciesluk, J., Ambroziak, L.: Vision-based obstacle avoidance for unmanned aerial vehicles. In: 4th International Congress on Image and Signal Processing (CISP), vol. 4, pp. 2020–2025. IEEE (2011)
97. Yuan, C., Recktenwald, F., Mallot, H.A.: Visual steering of uav in unknown environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3906–3911. IEEE (2009)
98. Shah, S.I.A., Johnson, E.N.: 3d obstacle detection using a single camera. In: AIAA Guidance, Navigation, and Control Conference (AIAA), vol. 5678 (2009)
99. Lee, J.O., Lee, K.H., Park, S.H., Im, S.G., Park, J.: Obstacle avoidance for small uavs using monocular vision. Aircraft Eng. Aeros. Technol. 83(6), 397–406 (2011)
100. Watanabe, Y., Fabiani, P., Le Besnerais, G.: Towards a uav visual air-to-ground target tracking in an urban environment
101. Watanabe, Y., Lesire, C., Piquereau, A., Fabiani, P., Sanfourche, M., Le Besnerais, G.: The onera ressac unmanned autonomous helicopter: Visual air-to-ground target tracking in an urban environment. In: American Helicopter Society 66th Annual Forum (AHS 2010) (2010)
102. Jian, L., Xiao-min, L.: Vision-based navigation and obstacle detection for uav. In: International Conference on Electronics, Communications and Control (ICECC), pp. 1771–1774. IEEE (2011)
103. Byrne, J., Cosgrove, M., Mehra, R.: Stereo based obstacle detection for an unmanned air vehicle. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2830–2835. IEEE (2006)
104. Yadav, V., Wang, X., Balakrishnan, S.: Neural network approach for obstacle avoidance in 3-d environments for uavs. In: American Control Conference, 6 pp. IEEE (2006)
105. Srinivasan, M.V., Thurrowgood, S., Soccol, D.: An optical system for guidance of terrain following in uavs. In: International Conference on Video and Signal Based Surveillance (AVSS), pp. 51–51. IEEE (2006)
106. Hrabar, S., Sukhatme, G., Corke, P., Usher, K., Roberts, J.: Combined optic-flow and stereo-based navigation of urban canyons for a uav. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3309–3316. IEEE (2005)
107. Olivares-Mendez, M.A., Mejias, L., Campoy, P., Mellado-Bataller, I.: Quadcopter see and avoid using a fuzzy controller. In: Proceedings of the 10th International FLINS Conference on Uncertainty Modeling in Knowledge Engineering and Decision Making (FLINS 2012). World Scientific (2012)
108. Mohammed, A.D., Morris, T.: An improved camshift algorithm for object detection and extraction
109. Ahrens, S., Levine, D., Andrews, G., How, J.P.: Vision-based guidance and control of a hovering vehicle in unknown, gps-denied environments. In: International Conference on Robotics and Automation (ICRA), pp. 2643–2648. IEEE (2009)
110. Shi, J., Tomasi, C.: Good features to track. In: Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 593–600. IEEE (1994)
111. Lucas, B.D., Kanade, T., et al.: An iterative image registration technique with an application to stereo vision. In: IJCAI, vol. 81, pp. 674–679 (1981)
112. Mcfadyen, A., Mejias, L., Corke, P., Pradalier, C.: Aircraft collision avoidance using spherical visual predictive control and single point features. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 50–56. IEEE (2013)
113. Kim, Y., Jung, W., Bang, H.: Visual target tracking and relative navigation for unmanned aerial vehicles in a gps-denied environment. Int. J. Aeronaut. Space Sci. 15(3), 258–266 (2014)
114. Price, A., Pyke, J., Ashiri, D., Cornall, T.: Real time object detection for an unmanned aerial vehicle using an fpga based vision system. In: International Conference on Robotics and Automation (ICRA), pp. 2854–2859. IEEE (2006)
115. Jeon, B., Baek, K., Kim, C., Bang, H.: Mode changing tracker for ground target tracking on aerial images from unmanned aerial vehicles. In: 13th International Conference on Control, Automation and Systems (ICCAS 2013), pp. 1849–1853. IEEE (2013)
116. Rodriguez, J., Castiblanco, C., Mondragon, I., Colorado, J.: Low-cost quadrotor applied for visual detection of landmine-like objects. In: International Conference on Unmanned Aircraft Systems (ICUAS), pp. 83–88. IEEE (2014)
117. Gu, A., Xu, J.: Vision based ground marker fast detection for small robotic uav. In: 5th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 975–978. IEEE (2014)
118. Zou, J.T., Tseng, Y.C.: Visual track system applied in quadrotor aerial robot. In: 2012 Third International Conference on Digital Manufacturing and Automation (ICDMA), pp. 1025–1028. IEEE (2012)
119. Teuliere, C., Eck, L., Marchand, E.: Chasing a moving target from a flying uav. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4929–4934. IEEE (2011)
120. Zhou, J.: Ekf based object detect and tracking for uav by using visual-attention-model. In: International Conference on Progress in Informatics and Computing (PIC), pp. 168–172. IEEE (2014)
121. Watanabe, Y., Fabiani, P., Le Besnerais, G.: Simultaneous visual target tracking and navigation in a gps-denied environment. In: International Conference on Advanced Robotics (ICAR), pp. 1–6. IEEE (2009)
122. Saif, A.S., Prabuwono, A.S., Mahayuddin, Z.R.: Real time vision based object detection from uav aerial images: A conceptual framework. In: Intelligent Robotics Systems: Inspiring the NEXT, pp. 265–274. Springer (2013)
123. Gaszczak, A., Breckon, T.P., Han, J.: Real-time people and vehicle detection from uav imagery. In: IS&T/SPIE Electronic Imaging, pp. 78780B–78780B. International Society for Optics and Photonics (2011)
124. Li, Z., Ding, J.: Ground moving target tracking control system design for uav surveillance. In: IEEE International Conference on Automation and Logistics, pp. 1458–1463. IEEE (2007)
125. Maier, J., Humenberger, M.: Movement detection based on dense optical flow for unmanned aerial vehicles. Int. J. Adv. Robot. Syst. 10, 1–11 (2013)
126. Tarhan, M., Altuğ, E.: A catadioptric and pan-tilt-zoom camera pair object tracking system for uavs. J. Intell. Robot. Syst. 61(1–4), 119–134 (2011)
127. Majidi, B., Bab-Hadiashar, A.: Aerial tracking of elongated objects in rural environments. Mach. Vis. Appl. 20(1), 23–34 (2009)
128. Liu, X., Lin, Z., Acton, S.T.: A grid-based bayesian approach to robust visual tracking. Digit. Signal Process. 22(1), 54–65 (2012)
129. Candamo, J., Kasturi, R., Goldgof, D.: Using color profiles for street detection in low-altitude uav video. In: SPIE Defense, Security, and Sensing, pp. 73070O–73070O. International Society for Optics and Photonics (2009)
130. Pestana, J., Sanchez-Lopez, J.L., Saripalli, S., Campoy, P.: Computer vision based general object following for gps-denied multirotor unmanned vehicles. In: American Control Conference (ACC), pp. 1886–1891. IEEE (2014)
131. Qadir, A., Semke, W., Neubert, J.: Vision based neuro-fuzzy controller for a two axes gimbal system with small uav. J. Intell. Robot. Syst. 74(3–4), 1029–1047 (2014)
132. Mondragon, I.F., Campoy, P., Correa, J.F., Mejias, L.: Visual model feature tracking for uav control. In: IEEE International Symposium on Intelligent Signal Processing (WISP), pp. 1–6. IEEE (2007)
133. Zhao, S., Hu, Z., Yin, M., Ang, K.Z., Liu, P., Wang, F., Dong, X., Lin, F., Chen, B.M., Lee, T.H.: A robust real-time vision system for autonomous cargo transfer by an unmanned helicopter. IEEE Trans. Ind. Electron. 62(2) (2015)
134. Cichella, V., Kaminer, I., Dobrokhodov, V., Hovakimyan, N.: Coordinated vision-based tracking for multiple uavs. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 656–661. IEEE (2015)
135. Lin, S., Garratt, M.A., Lambert, A.J.: Monocular vision-based real-time target recognition and tracking for autonomously landing an uav in a cluttered shipboard environment. Autonom. Robots, 1–21 (2016)
136. Fu, C., Duan, R., Kircali, D., Kayacan, E.: Onboard robust visual tracking for uavs using a reliable global-local object model. Sensors 16(9), 1406 (2016)
137. Tomic, T., Schmid, K., Lutz, P., Domel, A., Kassecker, M., Mair, E., Grixa, I.L., Ruess, F., Suppa, M., Burschka, D.: Toward a fully autonomous uav: Research platform for indoor and outdoor urban search and rescue. IEEE Robot. Autom. Mag. 19(3), 46–56 (2012)
138. Wang, T., Wang, C., Liang, J., Chen, Y., Zhang, Y.: Vision-aided inertial navigation for small unmanned aerial vehicles in gps-denied environments. Int. J. Adv. Robot. Syst. (2013)
139. Zhao, S., Lin, F., Peng, K., Chen, B.M., Lee, T.H.: Homography-based vision-aided inertial navigation of uavs in unknown environments. In: AIAA Guidance, Navigation, and Control Conference (2012)
140. Cocchioni, F., Mancini, A., Longhi, S.: Autonomous navigation, landing and recharge of a quadrotor using artificial vision. In: International Conference on Unmanned Aircraft Systems (ICUAS), pp. 418–429. IEEE (2014)
141. Carrillo, L.R.G., Flores Colunga, G., Sanahuja, G., Lozano, R.: Quad rotorcraft switching control: An application for the task of path following. IEEE Trans. Control Syst. Technol. 22(4), 1255–1267 (2014)
142. Lee, D., Kim, Y., Bang, H.: Vision-aided terrain referenced navigation for unmanned aerial vehicles using ground features. Proc. Inst. Mech. Eng. Part G: J. Aeros. Eng. 228(13), 2399–2413 (2014)
143. Yol, A., Delabarre, B., Dame, A., Dartois, J.E., Marchand, E.: Vision-based absolute localization for unmanned aerial vehicles. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3429–3434. IEEE (2014)
144. Tardif, J.P., George, M., Laverne, M., Kelly, A., Stentz, A.: Vision-aided inertial navigation for power line inspection. In: 1st International Conference on Applied Robotics for the Power Industry (CARPI), pp. 1–6 (2010)
145. Sanahuja, G., Castillo, P.: Embedded laser vision system for indoor aerial autonomous navigation. J. Intell. Robot. Syst. 69(1–4), 447–457 (2013)
146. Tippetts, B.J., Lee, D.J., Fowers, S.G., Archibald, J.K.: Real-time vision sensor for an autonomous hovering micro unmanned aerial vehicle. J. Aeros. Comput. Inf. Commun. 6(10), 570–584 (2009)
147. Bošnak, M., Matko, D., Blažič, S.: Quadrocopter hovering using position-estimation information from inertial sensors and a high-delay video system. J. Intell. Robot. Syst. 67(1), 43–60 (2012)
148. Frew, E.W., Langelaan, J., Stachura, M.: Adaptive planning horizon based on information velocity for vision-based navigation. In: AIAA Guidance, Navigation and Controls Conference (2007)
149. Bircher, A., Kamel, M., Alexis, K., Oleynikova, H., Siegwart, R.: Receding horizon "next-best-view" planner for 3d exploration. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1462–1468. IEEE (2016)
150. Nuske, S., Choudhury, S., Jain, S., Chambers, A., Yoder, L., Scherer, S., Chamberlain, L., Cover, H., Singh, S.: Autonomous exploration and motion planning for an unmanned aerial vehicle navigating rivers. J. Field Robot. 32(8), 1141–1162 (2015)
151. Avellar, G.S., Pereira, G.A., Pimenta, L.C., Iscold, P.: Multi-uav routing for area coverage and remote sensing with minimum time. Sensors 15(11), 27783–27803 (2015)
152. Burri, M., Oleynikova, H., Achtelik, M.W., Siegwart, R.: Real-time visual-inertial mapping, re-localization and planning onboard mavs in unknown environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1872–1878. IEEE (2015)

Christoforos Kanellakis is currently pursuing his Ph.D. degree within the Control Engineering Group, Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology (LTU), Luleå, Sweden. He received his Diploma from the Electrical & Computer Engineering Department of the University of Patras (UPAT), Greece, in 2015. He currently works in the field of robotics, focusing on the combination of control and vision to enable robots to perceive and interact with the environment.

George Nikolakopoulos is Professor of Robotics and Automation at the Department of Computer Science, Electrical and Space Engineering at Luleå University of Technology, Luleå, Sweden. His work focuses on the areas of robotics and control applications, and he has extensive experience in creating and managing European and national research projects. In the past he has worked as project manager and principal investigator in several R&D&I projects funded by the EU, ESA, and the Swedish and Greek national research ministries. George is the coordinator of the H2020-ICT project AEROWORKS in the field of aerial collaborative UAVs and of the H2020-SPIRE project DISIRE in the field of integrated process control. In 2013 he established the largest outdoor motion capture system in Sweden, and most probably in Europe, as part of the FROST Field Robotics Lab at Luleå University of Technology. In 2014 he was nominated as LTU's Wallenberg candidate, one out of three nominations from the university and 16 engineering nominations in total in Sweden. In 2003, George received the Information Societies Technologies (IST) Prize Award for the best paper promoting the scopes of the European IST (currently known as ICT) sector. His publications in the field of UAVs have received top recognition from the related scientific community, having been listed several times among the TOP 25 most popular publications in Control Engineering Practice from Elsevier. In 2014 he received the 2014 Premium Award for Best Paper in IET Control Theory and Applications, Elsevier, for his research work in the area of UAVs. His published scientific work includes more than 150 international journal and conference publications in the fields of his interest.