
IEEE ROBOTICS AND AUTOMATION LETTERS, VOL. 7, NO. 2, APRIL 2022

Free as a Bird: Event-Based Dynamic Sense-and-Avoid for Ornithopter Robot Flight

Juan Pablo Rodríguez-Gómez, Raul Tapia, Maria del Mar Guzmán Garcia, Jose Ramiro Martínez-de Dios, and Anibal Ollero

Abstract—Autonomous flight of flapping-wing robots is a major challenge for robot perception. Most of the previous sense-and-avoid works have studied the problem of obstacle avoidance for flapping-wing robots considering only static obstacles. This letter presents a fully onboard dynamic sense-and-avoid scheme for large-scale ornithopters using event cameras. These sensors trigger pixel information due to changes of illumination in the scene, such as those produced by dynamic objects. The method performs event-by-event processing in low-cost hardware such as that onboard small aerial vehicles. The proposed scheme detects obstacles and evaluates possible collisions with the robot body. The onboard controller actuates over the horizontal and vertical tail deflections to execute the avoidance maneuver. The scheme is validated in both indoor and outdoor scenarios using obstacles of different shapes and sizes. To the best of the authors' knowledge, this is the first event-based method for dynamic obstacle avoidance in a flapping-wing robot.

Index Terms—Collision avoidance, aerial systems: perception and autonomy, event camera, ornithopter, flapping-wing robot.

Fig. 1. Image sequence of the E-Flap robot performing an obstacle avoidance maneuver in an experiment.

Manuscript received September 9, 2021; accepted January 27, 2022. Date of publication February 24, 2022; date of current version March 15, 2022. This letter was recommended for publication by Associate Editor Giuseppe Loianno and Editor Pauline Pounds upon evaluation of the reviewers' comments. This work was supported in part by the European Research Council as part of the GRIFFIN ERC Advanced Grant (Action 788247) (https://griffin-erc-advancedgrant.eu), in part by the European Commission as part of the AERIAL-CORE project (Grant H2020-2019-871479), and in part by the Plan Estatal de Investigación Científica y Técnica y de Innovación of the Ministerio de Universidades del Gobierno de España (Grant FPU19/04692). (Corresponding author: Juan Pablo Rodríguez-Gómez.)

The authors are with the GRVC Robotics Lab Sevilla, Universidad de Sevilla, 41004 Sevilla, Spain (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).

Digital Object Identifier 10.1109/LRA.2022.3153904

I. INTRODUCTION

FLAPPING-WING robots, also known as ornithopters, have recently attracted significant R&D interest. They can perform agile maneuvers [1] and combine flapping and gliding modes to reduce energy consumption [2]. Besides, flapping-wing robots are often made of soft materials, making them less dangerous than multirotors in case of collision [3]. Flapping-wing flight poses novel perception challenges different from those in multirotor flight. First, ornithopters generate lift and thrust by flapping strokes, causing mechanical vibrations and wide abrupt movements that highly impact onboard perception [4]. Besides, they have strict payload and energy limitations, which strongly constrain the installation of sensors and additional hardware, involving strict limitations on the onboard processing capacity. In fact, most reported ornithopter perception and control methods, e.g., [5], [6], are executed offboard and use measurements from external sensors such as motion capture systems. We are interested in autonomous navigation of ornithopter robots, and particularly in avoidance of dynamic obstacles. While static obstacles can often be assumed within the map (and addressed through trajectory planning) or detected with additional sensors such as LiDAR, this work deals with the avoidance of unexpected dynamic obstacles in ornithopters, which requires fast onboard detection and avoidance despite the strict payload and resource constraints of these platforms.

The perception scheme is based on event cameras. They are robust to motion blur and lighting conditions and have moderate weight and low energy consumption. Hence, they are suitable to deal with the flapping-wing flight perception challenges [4]. Besides, event cameras are suitable for dynamic obstacle detection by directly providing pixel information of moving objects in the scene. Additionally, efficient event-based processing techniques can provide estimates at very high rates. Many successful event-based perception techniques have been developed [7].

This letter presents a dynamic obstacle sense-and-avoid method for ornithopter robots. By processing only one onboard event camera, the robot rapidly detects dynamic obstacles and modifies its trajectory to avoid them, exploiting the low latency of event cameras and the agility of ornithopter flight. Dynamic obstacles are segmented using the spatio-temporal event information from objects that move with a different velocity than the background. An event optical flow method estimates their direction of motion, and a reactive evasive maneuver strategy rapidly evaluates and prevents collisions. The method is implemented for online execution in resource-constrained hardware and is evaluated on the GRIFFIN E-Flap large-scale ornithopter [1], see Fig. 1, in indoor and outdoor experiments. To the best of the authors' knowledge, this is the first event-based obstacle avoidance method designed for and validated in flapping-wing robots.

The main contributions of the letter are: 1) an event-based dynamic object motion estimation method designed to perform in low-resource hardware; 2) a reactive obstacle avoidance method for large-scale ornithopters providing low latency onboard perception and control; and 3) experimental validation indoors and outdoors on a large-scale ornithopter.

TABLE I. Comparison of our approach with other dynamic obstacle avoidance methods for quadrotors. "E-by-e" refers to event-by-event, "e-im" to event images, and "e-pkg" to event packages.
This letter is organized as follows. Section II briefly summarizes the main works in the topics addressed in the letter. The general diagram of the event-based ornithopter dynamic obstacle avoidance scheme and its main components are described in Sections III and IV. Section V presents the experimental validation and robustness analyses. Section VI closes the letter and highlights the main future steps.

II. RELATED WORK

Reactive obstacle avoidance focuses on generating avoidance robot actions without relying on globally consistent map information. It can be categorized into map-based and map-less approaches [11]. The first builds a local map to compute obstacle-free trajectories [12], while the second aims at detecting nearby obstacles and directly performs the avoidance action [8]. Map-less methods provide faster obstacle avoidance and are suitable for platforms with limited processing capacity. A work analysing the perception latency in a high-speed sense-and-avoid scenario with a quadrotor is presented in [13]. The work in [14] presents a map-less obstacle avoidance solution to avoid flying obstacles using a saliency-based reinforcement learning approach.

Recent advances in ornithopter development have led to the necessity of developing onboard perception methods capable of providing information for navigation, landing, and perching. Optical flow estimation onboard a Micro Aerial Vehicle (MAV) flapping-wing robot is presented in [15]. The method sub-samples input images and uses a motion detection algorithm to compute the optical flow. The authors in [16] use the object appearance variation and optical flow to perform obstacle avoidance with a monocular camera. Obstacle detection and avoidance are computed in a ground station due to the weight limitation of the platform. A stereo-vision obstacle avoidance strategy for small-scale ornithopters is presented in [17]. The method computes sparse disparity maps from points with relatively high certainty for obstacle depth estimation. Obstacle avoidance is achieved through the droplet strategy, defining the necessary obstacle-free area in front of the robot to guarantee safe avoidance maneuvers.

The advantages offered by event cameras have increased the research interest in the computer vision and robotics communities [7]. Their μs temporal resolution and high dynamic range motivate their use for aerial robot perception. An event-based optical flow approach for autonomous MAV landing using a downwards-orientated event camera is presented in [18]. The event-based line tracker in [19] provides fast and stable references for quadrotor visual servoing to perform bioinspired landing trajectories. The drone racing dataset [20], including event data, intends to encourage the development of perception methods for high-speed drone maneuvers. The work in [21] accumulates events to build event images in order to estimate the position and orientation of a quadrotor using a visual odometry method, closing the loop for autonomous flight subject to rotor failures. In [22], an auto-tuned event-based vision scheme performs intruder detection on board an autonomous quadrotor in surveillance missions.

Recently, some event-based methods for reactive obstacle avoidance for quadrotors have been presented. Work [8] describes an evasive method for dynamic obstacles using stereo event cameras on a quadrotor. The object trajectory is computed and propagated in time to predict collisions. An event-based dodging system for quadrotors is presented in [9]. It performs image deblurring, odometry estimation, and moving object segmentation using Deep Learning. Work [10] presents a dynamic obstacle avoidance method for quadrotors that relies on event motion compensation to detect events triggered by moving obstacles, and uses a potential field approach to execute the evasive maneuver. These methods were designed for quadrotors and are not suitable for ornithopters due to the strong differences between both types of platforms.

Table I compares our approach and the methods in [8]–[10]. First, none of these methods is designed for platforms that move at medium-high velocities. They were validated on quadrotors hovering (or moving up to 1.5 m/s in the case of [10]), in which the majority of the triggered events are caused by the obstacle's motion, simplifying obstacle detection. Our method has been designed for flapping-wing robots, which require a minimum speed of 3 m/s for flight [1] and suffer from various angular and linear motions and vibrations [4], which trigger additional events caused by the static background, requiring specific moving obstacle detection methods. Moreover, methods [8]–[10] require significantly higher onboard weight and computational resources, which could not be mounted on a large-scale ornithopter. Methods [9] and [10] use a powerful embedded computer, a flight board, and an autopilot. Besides, the three methods use two event cameras, which also increases their computational requirements. Although [10] includes a solution with a monocular camera, it uses an additional board to run vision-based state estimation. Conversely, our method uses only one event camera and runs on a single lightweight onboard computer with low computational capacity to satisfy the ornithopter payload restrictions while providing a fast response (control loop closing at 250 Hz) to deal with the high flight velocities. Finally, [9] and [10] segment moving objects by processing event images resulting from accumulating the incoming events. Hence, in these methods event processing starts after the events have been accumulated, resulting in delays between event generation and processing, even in cases in which event image processing times are lower than the event image accumulation times, such as in [10]. Conversely, our method adopts event-by-event processing, exploiting the asynchronous nature of event generation and enabling shorter obstacle detection times, which is interesting in our case due to the medium-high ornithopter flight speeds.


III. GENERAL DESCRIPTION

Sense-and-avoid of agile aerial robots such as ornithopters requires low latency perception for fast obstacle detection. Event cameras provide visual information with μs resolution triggered by changes of illumination. These sensors have a high dynamic range (∼120 dB) and are robust to illumination conditions and to motion blur, typically desired features in aerial robotics [22]. Previous works have explored the advantages of event cameras in ornithopters [23], [24] and agile quadrotors [20]. Event cameras are compact, have moderate weight, and report low energy consumption.

The development and integration of sense-and-avoid systems for ornithopters entail additional requirements to those considered in other UAVs such as multirotors. First, ornithopters have strict payload capacity and space restrictions which limit the installation of powerful processing hardware, mechanical
stabilizers (e.g., gimbals), and sensors. Thus, onboard hardware is carefully selected to satisfy payload and weight balance restrictions and enable real-time onboard processing. Moreover, flapping-wing robots present complex kinematics and dynamics. They are non-holonomic robots using few actuators to control their position and orientation. They generate lift and thrust by flapping their wings, also producing forward, backward, or lateral movements. These aspects set additional requirements for obstacle avoidance, including low latency obstacle detection and evasion strategies that consider the platform kinematics and dynamics.

Fig. 2. General diagram of the proposed event-based sense-and-avoid scheme for flapping-wing flight.

The general diagram of the proposed sense-and-avoid scheme is shown in Fig. 2. The Dynamic Obstacle Motion Estimation method, see Section IV-A, detects dynamic obstacles and estimates their motion in the image plane. It uses the spatio-temporal information of events triggered by dynamic objects moving with a different velocity than the background. Although many motion-based segmentation methods using traditional frame-based cameras have been proposed [25], the use of event-based vision has interesting advantages in our problem. First, event-based processing provides natural robustness against motion blur and changes in lighting conditions. Further, motion segmentation using frame-based cameras processes full frames, while event-based methods only analyse asynchronous events, enabling faster processing: 250 Hz in the experiments shown in Section V.

The Collision Risk Evaluation Strategy module evaluates the risk of collision considering the robot geometry, see Section IV-B. If a collision risk situation is detected, an avoidance maneuver is performed. The optical flow of detected obstacles is used to reactively command the ornithopter to avoid collisions. The Tail Control method meets both robustness and simplicity requirements. It actuates on the vertical and horizontal tail deflections to change the robot flight direction. The value of the control commands is adjusted using a simplification of the robot model to consider the dynamic constraints enclosed in the evasion maneuver. The event processing method leverages ASAP [26], which adapts event packaging such that events are processed as soon as possible while avoiding computational overflows, and ensures that the control loop closes at 250 Hz on the onboard resource-constrained hardware.

IV. METHODS

A. Dynamic Object Motion Estimation

The proposed method performs event-by-event processing to exploit the asynchronous nature of event cameras. Each event is defined by the tuple e = (x, ts, p), where x represents the pixel coordinates (u, v), ts is the timestamp of the event, and p is the polarity, either positive or negative. The block diagram of the proposed method is shown in Fig. 3. The Time Filter module detects events belonging to moving objects using as reference the timestamps of the current and previous events. The event-based Corner Detector module finds relevant features from the events triggered by dynamic objects. The Optical Flow module determines the direction of motion of the corners belonging to moving objects. Optical flow is computed only from corner events to reduce the computational cost. Finally, Clustering gathers optical flow measurements to compute the average flow of the detected dynamic objects. Algorithm 1 describes the proposed dynamic obstacle motion estimation method.

Previous event-based methods for dynamic obstacle detection rely either on optimization methods [27] or motion compensation techniques [10] to distinguish events belonging to moving objects. Our approach focuses on detecting events from dynamic objects with low computational processing and a low latency response, suitable for sense-and-avoid onboard ornithopters. The Time Filter module detects events generated from moving objects using as reference the timestamp difference of the events triggered at the same pixel location. The surface of active events (SAE) S maps the event coordinates x to the timestamp ts of the last event occurring at that location, S(x). Thus, S describes a 2D representation of the timestamp evolution of triggered events. Under the assumption that objects move with a high relative velocity w.r.t. the robot, the events triggered by dynamic objects are detected using a time threshold τ. If the time difference Δts between an incoming event e_k and S(x_k) is lower than τ, the event is considered to belong to a moving object. Next, the timestamp of e_k updates S by S(x_k) = ts_k for future evaluations.
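For illustration, the following is a minimal C++ sketch of such a time filter, assuming a dense per-pixel SAE with microsecond timestamps; the class and method names (`TimeFilter`, `isDynamic`) are ours and not taken from the authors' implementation.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch of the Time Filter: a Surface of Active Events (SAE) stores,
// per pixel, the timestamp of the last event seen there. An incoming event is
// labeled "dynamic" if it arrives within tau microseconds of the previous
// event at the same pixel. Types and names are illustrative only.
class TimeFilter {
 public:
  TimeFilter(int width, int height)
      : width_(width), sae_(static_cast<size_t>(width) * height, 0) {}

  // Returns true if the event at pixel (u, v) with timestamp ts_us is
  // considered to belong to a moving object, then updates the SAE.
  bool isDynamic(int u, int v, int64_t ts_us, int64_t tau_us) {
    int64_t& last = sae_[static_cast<size_t>(v) * width_ + u];
    const bool dynamic = (last > 0) && (ts_us - last < tau_us);
    last = ts_us;  // S(x_k) = ts_k for future evaluations
    return dynamic;
  }

 private:
  int width_;
  std::vector<int64_t> sae_;
};
```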
The value of τ depends on the velocity of the robot, defined by τ = α τ_vz + (1 − α) τ_ωx, where τ_vz and τ_ωx are the contributions due to the current linear forward and pitch angular velocities of the robot (v_z and ω_x, respectively), and α ∈ [0, 1] sets the contribution of each velocity component. τ_vz and τ_ωx are defined as follows:

τ_vz = τ_H − (τ_H − τ_L)(v_z − v_L)/(v_H − v_L),   τ_ωx = τ_H − (τ_H − τ_L)(ω_x − ω_L)/(ω_H − ω_L),   (1)

where [v_L, v_H] and [ω_L, ω_H] are the typical velocity ranges of the robot flight. The operation range [τ_L, τ_H] is empirically selected, see Section V. Fig. 4 shows an example of events belonging to a moving object obtained by the Time Filter.
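As a sketch of how (1) can be evaluated online, assuming the current velocities are clamped to the nominal ranges before interpolation (the letter does not state how out-of-range values are handled), the helper below mirrors the two linear interpolations and the α-blend; all names are illustrative.

```cpp
#include <algorithm>

// Velocity-dependent time threshold tau of Eq. (1): each component interpolates
// linearly from tau_H (slow robot) down to tau_L (fast robot), and the two
// contributions are blended with alpha. Clamping is our assumption.
struct TauParams {
  double tau_L, tau_H;  // operation range of the threshold [s]
  double v_L, v_H;      // typical forward velocity range [m/s]
  double w_L, w_H;      // typical pitch rate range [rad/s]
  double alpha;         // weight of the linear-velocity contribution
};

double adaptiveTau(double v_z, double w_x, const TauParams& p) {
  auto interp = [&](double x, double lo, double hi) {
    const double r = std::clamp((x - lo) / (hi - lo), 0.0, 1.0);
    return p.tau_H - (p.tau_H - p.tau_L) * r;  // one component of Eq. (1)
  };
  const double tau_v = interp(v_z, p.v_L, p.v_H);
  const double tau_w = interp(w_x, p.w_L, p.w_H);
  return p.alpha * tau_v + (1.0 - p.alpha) * tau_w;
}
```

With the Section V settings ([τ_L, τ_H] = [15, 25] ms, [v_L, v_H] = [3, 6] m/s, [ω_L, ω_H] = [1.3, 3] rad/s, α = 0.8), this helper reproduces the threshold range used in the experiments.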
Next, the events belonging to dynamic obstacles are processed to estimate their direction of motion, module Optical Flow in Fig. 3. That optical flow provides the relative motion estimation between the camera and the objects. Our approach uses the event-based optical flow method ABMOF [28]. It is based on block-matching operations between event slices. Slices are 2D histograms of events collected during the accumulation time d. ABMOF includes two different control strategies to vary d depending on the event generation through time.


Fig. 3. Block diagram of the Dynamic Object Motion Estimation method: (a) Time Filter detects events triggered by moving objects; (b) Corner Detector finds
relevant features; (c) Optical Flow estimates the motion of events in the image plane; and (d) Clustering estimates the mean object flow. For clearer visualization
the results are shown in event images accumulating events every 10 ms.

Fig. 4. Event-based dynamic object detection: right) frame from the APS
sensor of the DAVIS 346; left) event image of events accumulated during 10 ms
(in black) including events detected to belong to the moving object (magenta).

Despite using slices of accumulated events to compute optical flow, it performs event-by-event processing by computing the flow of each incoming event. Our method integrates an adapted version of ABMOF to reduce the onboard processing. First, event slices are fed only with events triggered by moving objects. This reduces the computational load by processing only <32% of the event stream in the experiments of Section V. Second, the optical flow is computed only from events considered as corners. The *eFast event corner detector in [29] is selected for this task given its fast response and low False Positive Rate. Computing optical flow only from corners reduces the computational cost by >25% while providing a stable optical flow estimation, as described in Section V.
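The per-event gating just described (time filter, then corner check, then optical flow, then clustering) can be sketched as below; the callable parameters stand in for the methods cited in the text ([28]–[30]) and are not their actual interfaces.

```cpp
#include <cstdint>

// Sketch of the per-event processing chain: only events that pass the time
// filter feed the flow slices, and only corner events receive an optical-flow
// estimate that is forwarded to the clustering stage. All callables are
// placeholders, not the cited libraries' APIs.
struct Event { int u, v; int64_t ts_us; bool polarity; };
struct Flow  { int u, v; int64_t ts_us; float vx, vy; };

template <typename DynamicTest, typename SliceFeeder, typename CornerTest,
          typename FlowEstimator, typename ClusterUpdate>
void processEvent(const Event& e,
                  DynamicTest&& isDynamic,      // time filter (Sec. IV-A)
                  SliceFeeder&& feedSlice,      // ABMOF event slices [28]
                  CornerTest&& isCorner,        // *eFast corner detector [29]
                  FlowEstimator&& computeFlow,  // block-matching flow [28]
                  ClusterUpdate&& updateCluster /* clustering of [30] */) {
  if (!isDynamic(e)) return;      // discard events from the static background
  feedSlice(e);                   // slices are fed only with dynamic events
  if (!isCorner(e)) return;       // flow is computed only at corner events
  const Flow f = computeFlow(e);  // event-by-event optical flow
  updateCluster(f);               // update the per-object mean flow
}
```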
Finally, the resulting event optical flow estimations are clustered to obtain an approximation of the object's optical flow. For this task, we used an adapted version of the event-by-event clustering algorithm described in [30]. This algorithm clusters events with spatio-temporal continuity within an adaptive time window. The algorithm was modified to cluster optical flow from events. It receives as input the tuple f = (x, ts, v), where v = (v_x, v_y) represents the optical flow estimation of the event with pixel coordinates x and timestamp ts. Clustering is performed by evaluating the proximity of each new event to a randomly selected event from each of the previous clusters. Each cluster is defined by its centroid x̄, which represents the average location of the cluster events, the average optical flow v̄ = (v̄_x, v̄_y), and the list Φ of previous tuples f assigned to the cluster. Each new valid optical flow estimation updates v̄ as in (2),

v̄ := η/(η + 1) v̄ + 1/(η + 1) v,   (2)

v̄ := η/(η − 1) v̄ − 1/(η − 1) v†,   (3)

where η is the number of tuples assigned to the cluster (i.e., the length of Φ). Similarly, the influence of old samples is removed from v̄, see (3), where v† corresponds to flow samples older than the lifetime τ_c. The parameter τ_c defines the lifetime of samples in the cluster. It is initially set to 5 ms and dynamically adjusted using the feedback reference from [28] in the experiments of Section V. Finally, ASAP prevents processing overflows by dynamically adapting event packaging to keep the responsiveness of the method.
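A minimal sketch of the incremental cluster-mean updates (2) and (3) follows; the deque-based bookkeeping of Φ and the explicit lifetime check are our simplifications of the adapted clustering of [30].

```cpp
#include <cstdint>
#include <deque>

// Per-cluster mean optical flow maintained incrementally: Eq. (2) folds a new
// flow sample into the mean, Eq. (3) removes samples older than the lifetime.
// This bookkeeping is a simplified stand-in for the algorithm of [30].
struct FlowSample { int64_t ts_us; float vx, vy; };

class FlowCluster {
 public:
  void addSample(const FlowSample& f) {
    const double n = static_cast<double>(samples_.size());  // eta before insertion
    vx_ = (n * vx_ + f.vx) / (n + 1.0);                      // Eq. (2)
    vy_ = (n * vy_ + f.vy) / (n + 1.0);
    samples_.push_back(f);
  }

  void removeOlderThan(int64_t ts_min_us) {
    // Keep at least one sample so the divisions in Eq. (3) stay well defined.
    while (samples_.size() > 1 && samples_.front().ts_us < ts_min_us) {
      const FlowSample old = samples_.front();
      const double n = static_cast<double>(samples_.size()); // eta before removal
      vx_ = (n * vx_ - old.vx) / (n - 1.0);                   // Eq. (3)
      vy_ = (n * vy_ - old.vy) / (n - 1.0);
      samples_.pop_front();
    }
  }

  float vx() const { return static_cast<float>(vx_); }
  float vy() const { return static_cast<float>(vy_); }

 private:
  std::deque<FlowSample> samples_;  // the list Phi of assigned tuples
  double vx_ = 0.0, vy_ = 0.0;      // mean flow of the cluster
};
```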
B. Collision Risk Evaluation Strategy

Fig. 5. Collision risk evaluation based on geometry constraints. ψ* and θ* are used to determine collision risk.

A reactive evasive maneuver strategy is used to prevent collision risk situations with incoming obstacles. The possible collisions with detected obstacles are determined by considering the geometry of the robot. The ornithopter is approximated by a 2W × 2H safety volume and the obstacle is approximated by a sphere of radius R̄ (see Fig. 5). R̄ encloses the obstacle volume and a Safety Distance b considering small sensing uncertainties, i.e., R̄ = R + b. Thus, the minimum angles ψ and θ to avoid a possible collision risk without deviating the flight trajectory are:

ψ*(t) = arctan((W + 2R̄)/(z(t) − 2R̄)),   θ*(t) = arctan((H + 2R̄)/(z(t) − 2R̄)),   (4)

where z(t) is the obstacle depth w.r.t. the camera.


For any pair of angles (ψ, θ) such that |ψ(t1)| ≤ |ψ*(t1)| and |θ(t1)| ≤ |θ*(t1)|, a possible collision between the robot and the obstacle might occur at t ≥ t1 if the robot trajectory is not modified. In these cases, an evasive maneuver is activated by guiding the ornithopter away from the obstacle trajectory. The collision evaluation of (4) depends on the robot-obstacle depth z(t) and the obstacle size. Three evaluation cases are considered based on the available obstacle information:
• Assuming z(t) is directly measurable (e.g., using a time-of-flight sensor) and R is known, the collision risk evaluation is directly implemented using (4).
• Assuming only R is known, z(t) can be estimated by z(t) = λR/L(t), where λ is the camera focal length and L(t) is the largest side of a rectangle enclosing the clustered events. The value of L(t) is computed by analysing the spatial distribution of events in the cluster.
• If neither z(t) nor R is known, the collision risk cannot be predicted. Thus, any detected obstacle triggers an evasive maneuver.
The second case, having prior information of the object geometry to perform collision risk evaluation, is experimentally validated in Section V. We adopt a conservative strategy that detects collision risk situations using geometrical considerations and exerts reactive evasion maneuvers using the information of the obstacle motion. This reduces processing requirements, enabling low-latency execution.
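A sketch of the geometric check of (4) for the second case, where the depth is estimated from the known obstacle radius and the cluster extent, is shown below. The bearing computation from the cluster centroid via the pinhole model is our assumption, as are all names; it is not the authors' implementation.

```cpp
#include <cmath>

// Geometric collision-risk check of Eq. (4) for the case in which only the
// obstacle radius R is known: depth is estimated as z = lambda * R / L from
// the focal length lambda and the largest side L of the rectangle enclosing
// the clustered events. Names and the pinhole bearing are illustrative.
struct RobotGeometry { double W, H; };  // half extents of the 2W x 2H safety volume [m]

bool collisionRisk(const RobotGeometry& robot,
                   double R, double safety_b,      // obstacle radius and safety distance [m]
                   double lambda_px, double L_px,  // focal length and cluster extent [px]
                   double cx_px, double cy_px) {   // cluster centroid offset from image center [px]
  const double Rb = R + safety_b;        // enlarged radius R-bar
  const double z  = lambda_px * R / L_px;// estimated depth [m]
  if (z <= 2.0 * Rb) return true;        // already too close; also guards Eq. (4)'s denominator

  // Minimum clearance angles of Eq. (4).
  const double psi_star   = std::atan((robot.W + 2.0 * Rb) / (z - 2.0 * Rb));
  const double theta_star = std::atan((robot.H + 2.0 * Rb) / (z - 2.0 * Rb));

  // Bearing of the obstacle centroid w.r.t. the optical axis (assumed pinhole model).
  const double psi   = std::atan(cx_px / lambda_px);
  const double theta = std::atan(cy_px / lambda_px);

  return std::fabs(psi) <= psi_star && std::fabs(theta) <= theta_star;
}
```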
C. Tail Control

Flapping-wing robots present more complex dynamics than multirotors and fixed-wing robots. Their non-holonomic, underactuated nature requires considering the robot dynamic restrictions to evaluate the effect of the actuation commands on future spatial configurations. Conversely, reactive obstacle avoidance requires a fast and robust response to perform aggressive movements. Evasion maneuvers must be performed as fast as possible, thus the control method must be computationally simple. Additionally, since the robot is solely controlled using onboard perception, which implies high levels of uncertainty, obstacle avoidance control must be robust to minimize its effect.

One feasible control strategy is to perform an avoidance maneuver with a velocity vector in the opposite direction w.r.t. the incoming obstacle velocity. From the obstacle mean optical flow v̄ estimated in Section IV-A, the controller computes the longitudinal, δ_e, and lateral, δ_r, tail deflections to perform the evasive maneuver. To accomplish both the robustness and computational efficiency requirements, the following control law is implemented:

u_react = −(κ_0 + κ_1 v̄) ∘ v̄,   (5)

where ∘ denotes the Hadamard product, u_react = [δ_e^react, δ_r^react]^T is the control action, and κ_0 and κ_1 are controller gains. The values of κ_0 and κ_1 were experimentally tuned. The criterion followed was to achieve a fast control response when an obstacle is detected.

The previous solution assumes that the best maneuver to avoid an obstacle is to fly in the opposite direction to its velocity. However, this strategy is agnostic to the robot dynamics. To cope with this limitation, our method also considers a flapping-wing linearized model adapted from [31] to compute the tail deflections. The model parameters were initially approximated empirically and then fitted by performing different tests inside a motion capture system. The ornithopter linear and angular velocities and its attitude are estimated using an onboard inertial navigation system.

From the current robot configuration, and given u_react from (5), the adapted model is used to compute the robot acceleration for each tail angle in a discretized range between δ_e^react ± 10° for longitudinal deflection and between δ_r^react ± 10° for lateral deflection. The selected tail angle increments, Δu_model = [Δδ_e^model, Δδ_r^model]^T, are those that give the robot the greatest acceleration, i.e., those that increase the speed of the evasion maneuver the most in the shortest time. Hence, the tail deflections to command are composed as u = u_react + Δu_model. Additionally, our method constrains the deflection values considering the restrictions presented in [32] to avoid stall.
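A sketch of the reactive law (5) followed by the discretized model-based refinement is given below. The `predictAcceleration` callback stands in for the linearized flapping-wing model adapted from [31]; the scalar gains, the 1° search step, and all names are our assumptions, while the ±10° range follows the text.

```cpp
#include <functional>

// Reactive tail command of Eq. (5) plus the model-based increment search: the
// reactive term opposes the obstacle mean flow (element-wise product), and the
// increment within +/-10 deg that maximizes the predicted acceleration is
// added to it. Gains are treated as scalars for brevity.
struct TailCmd { double de, dr; };  // longitudinal and lateral deflections [deg]

TailCmd evasiveTailCommand(
    double flow_vx, double flow_vy,  // mean obstacle optical flow (v_x, v_y)
    double k0, double k1,            // controller gains (illustrative)
    const std::function<double(double, double)>& predictAcceleration) {
  // Eq. (5): u_react = -(k0 + k1 * v) o v, component-wise.
  TailCmd u_react{-(k0 + k1 * flow_vx) * flow_vx,
                  -(k0 + k1 * flow_vy) * flow_vy};

  // Discretized search over +/-10 deg around u_react for the increment
  // giving the greatest predicted acceleration.
  const double range_deg = 10.0, step_deg = 1.0;
  double best_acc = -1e9, best_dde = 0.0, best_ddr = 0.0;
  for (double dde = -range_deg; dde <= range_deg; dde += step_deg) {
    for (double ddr = -range_deg; ddr <= range_deg; ddr += step_deg) {
      const double acc = predictAcceleration(u_react.de + dde, u_react.dr + ddr);
      if (acc > best_acc) { best_acc = acc; best_dde = dde; best_ddr = ddr; }
    }
  }
  return TailCmd{u_react.de + best_dde, u_react.dr + best_ddr};
}
```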
V. EXPERIMENTS

The experimental platform is the E-Flap robot, a customized ornithopter developed at the GRVC Robotics Lab. The robot has a total length of 95 cm, a wingspan of 1.5 m, a weight of 510 g, and a maximum payload of 520 g at the expense of limited maneuverability and reduced flight time. The sensors and hardware are placed along the robot body to improve the platform maneuverability. A low-cost Khadas VIM3 handles onboard online perception and control. The robot equips a VectorNav VN-200 inertial navigation system that provides measurements of the ornithopter's body frame velocity. A DAVIS346 event camera provides low latency events from its integrated dynamic vision sensor (DVS). The camera weight was reduced to a third of its original value to satisfy the robot weight restrictions. The camera mounts a lens with a total weight of 5 g, a Field of View of 68° horizontal and 53.5° vertical, and an IR-cut filter to cope with the IR emissions from the motion capture system. The event-based obstacle detection method, the evasive maneuver strategy, and the flapping-wing avoidance controller were implemented in C++ using ROS.


Although our pipeline performs event-by-event processing, event camera drivers such as [26] provide event packages instead of single events to prevent communication overflow. Due to the computational limitations of the onboard computer, ASAP was configured to feed our algorithm with event packages at a maximum rate of 250 Hz, without hampering event transmission or producing processing overflow. Hence, our pipeline was configured to output v̄ at the same rate as the input packages.

Fig. 6. Different size objects considered for the experiments: left) small box; center) stuffed toy; and right) FitBall.

Fig. 7. Histogram of the distance between the detected object and the ground truth in the overall tests.

TABLE II. Accuracy, Precision, True Positive Rate (TPR), and False Positive Rate (FPR) results of dynamic obstacle detection using the different obstacles while varying the launching distance.

The proposed dynamic obstacle sense-and-avoid system was experimentally validated in both indoor and outdoor scenarios. The GRVC Testbed is a closed area of 15 × 21 × 8 m with 24 OptiTrack Primex 13 cameras providing millimeter-accuracy pose estimations. The outdoor scenario is a soccer field of 48 × 54 m with surrounding obstacles, suitable to perform short flight experiments with the ornithopter. For all the experiments, the parameters in (1) were set to v_L = 3 m/s, v_H = 6 m/s, ω_L = 1.3 rad/s, and ω_H = 3 rad/s, given by the typical flight kinematics of the robot. Further, α was set to 0.8, as the robot mainly performs forward motions. Parameter τ sets the threshold to distinguish the events triggered by dynamic objects. A large value entails detections at longer distances while permitting events triggered by static objects. Besides, a small value of τ performs quite selective filtering at the expense of reducing the distance at which obstacles are detected. Hence, τ defines a trade-off between filtering events triggered by static objects and the maximum distance at which an obstacle can be detected. From our experiments, τ_L = 15 ms and τ_H = 25 ms were empirically selected to provide a trade-off, giving detection distances of 6 m while filtering 82% of the events triggered by the scene background.
by the scene background. samples. Next, the overall Accuracy, Precision, True Positive
The experimental validation was divided into two parts. First, Rate (TPR), and False Positive Rate (FPR) were computed.
the dynamic obstacle detection and motion estimation were Table II summarizes the detection results of each obstacle at
evaluated. Second, the evaluation of the full obstacle avoidance different distances. The method reported an overall accuracy of
system was performed in experiments in indoor and outdoor 91.7%. The distance directly affected the detection performance
scenarios with different illumination conditions. specially with small objects at distances >4 m. At such distances,
small objects triggered few events hampering object detection,
and the lack of triggered events affected the obstacle detection as
A. Obstacle Detection and Motion Estimation Evaluation
many events in S(x) did not satisfy the τ condition. In general,
In these experiments, obstacles were thrown in different at distances between 2 to 4 m the method reported an accuracy
directions into the Field of View (FoV) of the camera while above 94.7%. In this range, the size of the different objects in
the ornithopter performed forward flight. The experiments were the image was enough to perform successful dynamic object
performed in the Testbed to retrieve the pose information from detection in the majority of the experiments. Additionally, the
the obstacle and the robot. The goal was to evaluate the detection method provided a low number of False Positives as reported by
and motion estimations performance of the method described in the average FPR of 4.2%, and an average Precision of 95.1%.
Section IV-A. Three objects with different sizes were considered Finally, a few detections were missed during the experiments as
(Fig. 6): a Small Box of size 220 × 200 × 150 mm, a Stuffed Toy evidenced by the average TPR of 88.5%.
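The evaluation criterion and the aggregate metrics can be sketched as follows, assuming the per-comparison labels are already available; η and all names are illustrative.

```cpp
#include <cstdlib>

// A detection counts as a True Positive when the Manhattan distance between
// the detected centroid and the ground-truth centroid is at most eta pixels.
// The metrics below are the standard Accuracy, Precision, TPR, and FPR.
bool isTruePositive(int det_u, int det_v, int gt_u, int gt_v, int eta = 10) {
  return std::abs(det_u - gt_u) + std::abs(det_v - gt_v) <= eta;
}

struct Counts { double tp, fp, tn, fn; };

double accuracy (const Counts& c) { return (c.tp + c.tn) / (c.tp + c.tn + c.fp + c.fn); }
double precision(const Counts& c) { return c.tp / (c.tp + c.fp); }
double tpr      (const Counts& c) { return c.tp / (c.tp + c.fn); }
double fpr      (const Counts& c) { return c.fp / (c.fp + c.tn); }
```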
The obstacle detection method was also evaluated in multi-obstacle experiments in which three objects were thrown from different directions into the camera FoV at similar times. It reported an accuracy of 74.2% among all the experiments.


Fig. 8. Box plot (left) and mean and standard deviation (right) of the motion direction error along each of 30 experiments. The direction error is defined as the
instantaneous absolute difference between the direction of motion estimated by our approach and the obstacle direction from the ground truth. The mean error
considering all the experiments is 6.97◦ . The mean standard deviation considering all the experiments is 3.89◦ , and maximum instantaneous direction error is 19.54◦ .

The performance reduction was caused by the generation of False Positive samples when the obstacles overlapped in the image plane, which tended to merge clusters, hampering the individual detection of objects. Despite this degradation, the method results were satisfactory taking into account that the detection was performed with a single event camera.

The motion estimation evaluation consisted of comparing the mean optical flow of the dynamic obstacle v̄ with the ground truth measurements. The ground truth was obtained from the motion capture system by projecting the position of the obstacle in the image and estimating its motion direction from previous and future samples. For better validation, the obstacles were launched from a distance of 8 m to describe larger trajectories. A total of 30 experiments were performed. The error was defined as the instantaneous angle difference between the estimated direction of the obstacle movement and the ground truth direction. Fig. 8 shows the quartiles, mean, and standard deviation of the error along the object trajectory in each experiment. The resulting Root Mean Square Error was 11.2°, which is a reasonable error to guide the robot in a collision-free direction. The Safety Distance b in Section IV-B was used to account for this error by enlarging the obstacle geometry to add extra safety to the evasion maneuver.

B. Sense-and-Avoid Evaluation

TABLE III. Success rate of the proposed dynamic obstacle avoidance method in different scenarios.

Next, the full proposed sense-and-avoid system was evaluated. Two cases can be distinguished in case the obstacle detection method fails. A False Positive obstacle detection triggers an unnecessary evasion maneuver. A False Negative obstacle detection neglects the evaluation of a collision risk situation, potentially causing an impact between the robot and the obstacle. Different sets of experiments were analysed to evaluate the system performance. The ornithopter was launched towards a goal zone by an operator and performed forward flight using the controller described in [5]. Obstacles were launched in various directions to intercept the ornithopter and evaluate different evasive maneuvers. Lightweight obstacles were thrown using a motorized launcher, which sets their initial speed, while larger objects were manually launched. Obstacles were thrown from a distance of 10 m and reached an average speed in the range of [5, 8] m/s, producing relative speeds up to 11 m/s, see Table I. Our system checked for collision risk situations and activated an avoidance maneuver if a collision risk situation was detected. Three types of analyses were performed.

First, we performed experiments to analyse the performance of the system in case an Intersection of the Safety Volumes (ISV) occurred. An ISV exists when the robot safety volume and the sphere of radius R̄ enclosing the object intersect along the robot trajectory in case it performs no evasion maneuver. These robot trajectories were estimated assuming the robot maintains a forward motion. The experiments were performed indoors: the ground truth robot and obstacle positions were recorded using a motion capture system to check whether an ISV occurred. A total of 25 experiments were performed with each object using regular (760 lx) and dark (<15 lx) illumination conditions. The average avoidance success rate with the three objects, see Table III, was 90.7%. The best result was obtained using the FitBall as obstacle, whose larger size allowed an earlier detection of the collision risk situation. The experiments with dark illumination conditions also reported a remarkable success rate: the slight performance degradation was caused by the additional noisy events produced by the poor lighting.

Second, we performed experiments to analyze the system performance in case ISV situations did not occur. 20 experiments were performed using the Small Box obstacle. In 85% of the experiments, the system did not detect collision risk situations. Only in 15% of the cases did it detect collision risk situations, activating an unnecessary evasive maneuver, in flights with no ISV situations. This result was mainly caused by the conservative selection of the radius R̄, which enlarged the obstacle size to reduce impacts with the robot body. False Positive detections increased in the dark lighting experiments due to the higher level of noisy events.

Finally, the system performance in outdoor experiments was analyzed to evaluate its robustness in different scenarios. In these experiments, the collisions were evaluated visually due to the lack of motion capture information. A total of 25 experiments were performed with each object under light and dark lighting conditions. The results are shown in Table III. The proposed system had a success rate of 92.0% with regular illumination conditions, and reported acceptable results (85.3%) with dark lighting conditions.

VI. CONCLUSION AND FUTURE WORK

This letter presents the first event-based obstacle avoidance system for large-scale flapping-wing robots.


The proposed approach exploits the advantages of event-based vision to detect dynamic obstacles and perform evasion maneuvers. Our scheme has been validated in several indoor and outdoor scenarios with different illumination conditions. It reports an average avoidance success rate of 89.7% evading dynamic obstacles of different sizes and shapes. Further, its event-by-event processing nature and efficient implementation allow fast onboard computation even in low processing capacity hardware, providing high rate estimations (250 Hz in our experiments). Future work includes the validation of our method in other agile robots. Further, the development of a map-based method for the avoidance of static obstacles and its integration in a complete obstacle avoidance system for static and dynamic objects are objects of future research.

ACKNOWLEDGMENT

The authors would like to thank Jesus Tormo for his help with the robot electronics, and Angela Romero and Francisca Lobera Corsetti for their support on the validation experiments.

REFERENCES

[1] R. Zufferey et al., "Design of the high-payload flapping wing robot E-Flap," IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 3097–3104, Apr. 2021.
[2] G. de Croon, "Flapping wing drones show off their skills," Sci. Robot., vol. 5, pp. 1–2, 2020.
[3] N. Haider, A. Shahzad, M. N. M. Qadri, and S. I. Ali Shah, "Recent progress in flapping wings for micro aerial vehicle applications," Proc. Inst. Mech. Eng., Part C: J. Mech. Eng. Sci., vol. 2, 2020, pp. 245–264.
[4] A. G. Eguíluz, J. P. Rodríguez-Gómez, J. Paneque, P. Grau, J. R. Martínez-de Dios, and A. Ollero, "Towards flapping wing robot visual perception: Opportunities and challenges," in Proc. IEEE Int. Workshop Res., Educ. Develop. UAS, 2019, pp. 335–343.
[5] F. Maldonado, J. Acosta, J. Tormo-Barbero, P. Grau, M. Guzmán, and A. Ollero, "Adaptive nonlinear control for perching of a bioinspired ornithopter," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2020, pp. 1385–1390.
[6] E. F. Helbling and R. J. Wood, "A review of propulsion, power, and control architectures for insect-scale flapping-wing vehicles," Appl. Mech. Rev., vol. 70, no. 1, pp. 1–9, 2018.
[7] G. Gallego et al., "Event-based vision: A survey," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 1, pp. 154–180, Jan. 2022.
[8] E. Mueggler, N. Baumli, F. Fontana, and D. Scaramuzza, "Towards evasive maneuvers with quadrotors using dynamic vision sensors," in Proc. Eur. Conf. Mobile Robots, 2015, pp. 1–8.
[9] N. J. Sanket et al., "EVDodgeNet: Deep dynamic obstacle dodging with event cameras," in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 10651–10657.
[10] D. Falanga, K. Kleber, and D. Scaramuzza, "Dynamic obstacle avoidance for quadrotors with event cameras," Sci. Robot., vol. 5, no. 40, pp. 1–14, 2020.
[11] S. Hrabar, "Reactive obstacle avoidance for rotorcraft UAVs," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2011, pp. 4967–4974.
[12] H. Oleynikova, D. Honegger, and M. Pollefeys, "Reactive avoidance using embedded stereo vision for MAV flight," in Proc. IEEE Int. Conf. Robot. Automat., 2015, pp. 50–56.
[13] D. Falanga, S. Kim, and D. Scaramuzza, "How fast is too fast? The role of perception latency in high-speed sense and avoid," IEEE Robot. Automat. Lett., vol. 4, no. 2, pp. 1884–1891, Apr. 2019.
[14] Z. Ma, C. Wang, Y. Niu, X. Wang, and L. Shen, "A saliency-based reinforcement learning approach for a UAV to avoid flying obstacles," Robot. Auton. Syst., vol. 100, pp. 108–118, 2018.
[15] F. G. Bermudez and R. Fearing, "Optical flow on a flapping wing robot," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2009, pp. 5027–5032.
[16] G. de Croon, E. de Weerdt, C. de Wagter, and B. Remes, "The appearance variation cue for obstacle avoidance," in Proc. IEEE Int. Conf. Robot. Biomimetics, 2010, pp. 1606–1611.
[17] S. Tijmons, G. C. H. E. de Croon, B. D. W. Remes, C. De Wagter, and M. Mulder, "Obstacle avoidance strategy using onboard stereo vision on a flapping wing MAV," IEEE Trans. Robot., vol. 33, no. 4, pp. 858–874, Aug. 2017.
[18] B. J. P. Hordijk, K. Y. Scheper, and G. C. de Croon, "Vertical landing for micro air vehicles using event-based optical flow," J. Field Robot., vol. 35, no. 1, pp. 69–90, 2018.
[19] A. G. Eguíluz, J. P. Rodríguez-Gómez, J. R. Martínez-de Dios, and A. Ollero, "Asynchronous event-based line tracking for time-to-contact maneuvers in UAS," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2020, pp. 5978–5985.
[20] J. Delmerico, T. Cieslewski, H. Rebecq, M. Faessler, and D. Scaramuzza, "Are we ready for autonomous drone racing? The UZH-FPV drone racing dataset," in Proc. IEEE Int. Conf. Robot. Automat., 2019, pp. 6713–6719.
[21] S. Sun, G. Cioffi, C. de Visser, and D. Scaramuzza, "Autonomous quadrotor flight despite rotor failure with onboard vision sensors: Frames vs. events," IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 580–587, Apr. 2021.
[22] J. P. Rodríguez-Gómez, A. G. Eguíluz, J. R. Martínez-de Dios, and A. Ollero, "Auto-tuned event-based perception scheme for intrusion monitoring with UAS," IEEE Access, vol. 9, pp. 44840–44854, 2021.
[23] J. P. Rodríguez-Gómez et al., "The GRIFFIN perception dataset: Bridging the gap between flapping-wing flight and robotic perception," IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 1066–1073, Apr. 2021.
[24] A. G. Eguíluz et al., "Why fly blind? Event-based visual guidance for ornithopter robot flight," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2021, pp. 1935–1942.
[25] D. Zhang and G. Lu, "Segmentation of moving objects in image sequence: A review," Circuits, Syst., Signal Process., vol. 20, no. 2, pp. 143–183, 2001.
[26] R. Tapia, A. G. Eguíluz, J. R. Martínez-de Dios, and A. Ollero, "ASAP: Adaptive scheme for asynchronous processing of event-based vision algorithms," in Proc. IEEE Int. Conf. Robot. Automat. Workshop Unconventional Sensors Robot., 2020.
[27] A. Mitrokhin, C. Fermüller, C. Parameshwara, and Y. Aloimonos, "Event-based moving object detection and tracking," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2018, pp. 6895–6902.
[28] M. Liu and T. Delbruck, "Adaptive time-slice block-matching optical flow algorithm for dynamic vision sensors," in Proc. Brit. Mach. Vis. Conf., Sep. 2018, pp. 1–12.
[29] E. Mueggler, C. Bartolozzi, and D. Scaramuzza, "Fast event-based corner detection," in Proc. Brit. Mach. Vis. Conf., 2017, pp. 1–11.
[30] J. P. Rodríguez-Gómez, A. G. Eguíluz, J. R. Martínez-de Dios, and A. Ollero, "Asynchronous event-based clustering and tracking for intrusion monitoring in UAS," in Proc. IEEE Int. Conf. Robot. Automat., 2020, pp. 8518–8524.
[31] S. Armanini, C. de Visser, G. de Croon, and M. Mulder, "Time-varying model identification of flapping-wing vehicle dynamics using flight data," J. Guid. Control Dyn., vol. 39, pp. 526–541, Mar. 2016.
[32] M. Guzmán et al., "Design and comparison of tails for bird-scale flapping-wing robots," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2021, pp. 6335–6342.
