
Future Generation Computer Systems 100 (2019) 86–97


RF-based drone detection and identification using deep learning approaches: An initiative towards a large open source drone database

Mohammad F. Al-Sa’d a,b,1, Abdulla Al-Ali a, Amr Mohamed a,∗, Tamer Khattab c, Aiman Erbad a

a Qatar University, Department of Computer Science and Engineering, Doha, Qatar
b Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
c Qatar University, Department of Electrical Engineering, Doha, Qatar

∗ Corresponding author. E-mail address: [email protected] (A. Mohamed).
1 This work was done while Mohammad F. Al-Sa’d was with the Computer Science and Engineering Department, Qatar University, Doha, Qatar.

highlights

• RF sensing is one of the most effective methods for drone detection.
• Collect, analyze, and record RF signals of different drones under different flight statuses.
• Design of three deep learning networks to detect and identify intruding drones.
• The developed RF database along with our implementations are publicly available.

article info

Article history: Received 10 December 2018; Received in revised form 12 April 2019; Accepted 1 May 2019; Available online 9 May 2019.

Keywords: UAV detection; Drone identification; Deep learning; Neural networks; Machine learning

abstract

The omnipresence of unmanned aerial vehicles, or drones, among civilians can lead to technical, security, and public safety issues that need to be addressed, regulated and prevented. Security agencies are in continuous search for technologies and intelligent systems that are capable of detecting drones. Unfortunately, breakthroughs in relevant technologies are hindered by the lack of open source databases for drone's Radio Frequency (RF) signals, which are remotely sensed and stored to enable developing the most effective way for detecting and identifying these drones. This paper presents a stepping stone initiative towards the goal of building a database for the RF signals of various drones under different flight modes. We systematically collect, analyze, and record raw RF signals of different drones under different flight modes such as: off, on and connected, hovering, flying, and video recording. In addition, we design intelligent algorithms to detect and identify intruding drones using the developed RF database. Three deep neural networks (DNN) are used to detect the presence of a drone, the presence of a drone and its type, and lastly, the presence of a drone, its type, and flight mode. Performance of each DNN is validated through a 10-fold cross-validation process and evaluated using various metrics. Classification results show a general decline in performance when increasing the number of classes. Averaged accuracy has decreased from 99.7% for the first DNN (2 classes), to 84.5% for the second DNN (4 classes), and lastly, to 46.8% for the third DNN (10 classes). Nevertheless, results of the designed methods confirm the feasibility of the developed drone RF database to be used for detection and identification. The developed drone RF database along with our implementations are made publicly available for students and researchers alike.

© 2019 Elsevier B.V. All rights reserved.

1. Introduction

Commercial unmanned aerial vehicles, or drones, are gaining great popularity over the recent years, thanks to their lower cost, smaller size, lighter weight, higher capabilities, and advancements in batteries and motors. This has rendered drones viable for various applications, such as traffic monitoring [1,2], weather observation [3], disaster management [4], spraying of agricultural chemicals [5], inspection of infrastructures [6], and fire detection and protection [7]. Drones are remotely controlled using wireless technologies such as Bluetooth, 4G and WiFi; hence, by using off-the-shelf upgrades, drones have become a modular solution. The ubiquitous utility of drones can lead to technical, security, and public safety issues that need to be addressed, regulated and prevented, e.g. spying, transfer of illegal or dangerous goods, disturbing electricity and telephone lines, and assault [8]. Therefore, regulating entities need technologies that are capable of detecting and identifying drones without prior assumption on their type or flight mode.

Conventional methods for detecting and identifying intruding drones, e.g. radars, vision and acoustics, are not solely reliable as they can be easily restrained [9,10]. Radio frequency (RF) sensing combined with deep learning approaches promised a solution; however, it was hindered by the lack of databases for the RF signals of drones [11]. In this paper, we (1) build a novel open source database for the RF signals of various drones under different flight modes, and (2) test the developed database in a drone detection and identification system designed using deep neural networks. This work is a stepping stone towards a larger database built by a community of researchers to encompass the RF signals of many other drones.

The rest of the paper is organized as follows: Section 2 is an overview of related work. We present in Section 3 the system model and describe our methodologies to build and test the database. In Section 4, we present and discuss results of the drone detection and identification system, and finally, we conclude in Section 5.

2. Related work

In this Section, we review current anti-drone systems and discuss the need for open source drone databases. Moreover, we review state-of-the-art methods used to detect and identify intruding drones and discuss their applicability in real-life scenarios. Finally, we review the role of deep learning techniques in anti-drone systems and discuss their feasibility to test the developed RF database.

Anti-drone systems: several commercial and military anti-drone systems have been discussed in the literature. A comprehensive overview of various systems and their deployed technologies is presented in [8]. Challenges and open research issues have been discussed in which "Database Build-Up", the need to build up an increasing database of drone signatures, was emphasized upon. In [10], state-of-the-art studies on drone surveillance have been surveyed and several anti-drone systems have been discussed. Moreover, various ways to detect, track, and interdict intruding drones have been reviewed in [11]. The authors have concluded that accurate detection and tracking requires a comprehensive database of drone signatures, hence our work comes as a stepping stone towards this goal.

Drone detection methods: various methods to detect and identify intruding drones have been discussed in the literature such as: radars [12], video surveillance [13], acoustic sensors [14], WiFi sniffing [15] and RF sensing [16]. In [17], a light weight, X-Band radar system was designed to detect drones using their Doppler signatures. Furthermore, a radar sensor was proposed in [18] to automatically detect and classify three distinct drones in a laboratory setting. Moreover, in [19], a drone detection method was introduced by exploiting 5G millimeter-wave deployments as radars. In [20,21], computer vision object detection methods were used to detect drones in the vicinity of birds. In addition, a system to detect and identify drones from surveillance videos was developed in [9,22]. In [23], acoustic drone detection and identification was performed using support vector machines. In addition, the same methodology was deployed in [24] to classify drones by their emitted sounds. Furthermore, in [25,26], drone detection and tracking was performed using acoustic cameras, and by direction of arrival (DOA) estimation in [27]. Moreover, in [28,29], drone sound identification was performed using correlation analysis. In [15,30], WiFi sniffing based drone detection was performed by statistically analyzing WiFi traffic for drone signatures. In addition, WiFi-based drone detection and disarming was conducted successfully in [31,32]. Moreover, an energy efficient system capable of detecting and disabling video feeds of WiFi-based drones was presented in [33]. In [16], a passive cost-effective RF sensing drone detection system was designed. In addition, drone detection based on RF sensing was proposed in [34]. Preliminary investigation of active/passive RF approaches for the detection of drones was presented in [35]. Furthermore, in [36,37], RF-based drone localization methods were developed by DOA estimation and surveillance drones.

Applicability of drone detection and identification methods depends on requirements mandated by real-life scenarios. That being said, we observed that methods other than RF sensing cannot be solely reliable to detect or identify intruding drones. On one hand, radar, vision, and acoustic based methods can be restrained in various ways such as: using stealth technology, changing the drone physical shape and rotors, using low noise rotors, and by emitting natural sounds, e.g. bird chirps, or white noise [9]. In addition, such methods require expensive equipment, e.g. high quality video cameras, that is not designed to detect drones [9]. Moreover, WiFi-based methods are inherently limited as they cannot detect drones operated by other wireless technologies, e.g. 4G, and they require knowledge of the drone's WiFi parameters, e.g. protocol and channel number. On the other hand, we found that RF sensing based methods for drone detection and identification are adequate to be used in real-life scenarios [37]. Such methods are independent of the wireless technology utilized by the drone, e.g. Bluetooth, 4G or WiFi, and are immune to physical alterations and differences among drones. However, current methods are still not fully automated nor robust due to the lack of large labeled databases for the drones RF signals. This has motivated us to build an open source database for the RF signals of various drones under different flight modes.

Drone detection techniques: intelligent detection and identification techniques have emerged vastly by the rise of data driven algorithms, such as neural networks. Deep neural networks (DNN) have shown surpassing results in various cognitive tasks such as speech recognition [38,39], object detection and identification [40], signal compression [41], and others in all fields of science [42]. In [18], a deep belief network was utilized to classify the spectral correlation functions of three drones. Moreover, a convolutional neural network (CNN) was used to detect the presence of drones from CCTV videos in [43], from surveillance images in [44], from Doppler signatures in [45], and from audio spectrograms in [46]. In addition, the utility of CNNs as object detectors for reconnaissance and surveillance using drones was proposed in [47]. Furthermore, reinforcement learning was used in [48] to detect temperature anomalies in drone's motors. DNNs versatility in solving optimization problems was demonstrated in other fields. For instance, in [49], it was used to detect known and unknown DDoS attacks; in [50], to detect and identify supply side fraud in programmatic exchanges; in [51], to control the water level in a four-tank system; in [52,53], to solve various numerical problems; and finally, in [54], to solve person search and re-identification problems. This has motivated us to utilize DNNs for the design of a drone detection and identification system using the developed RF database.

3.1. System model

Fig. 1 demonstrates our system model that can be divided into two subsystems; RF database development and drone detection and identification, see subsystems A and B in Fig. 1 respectively. The RF database development subsystem is comprised of the following three components:

• Drones under analysis: various drones that vary in size, capability, price, and technology. Flight modes are controlled by RF signals coming from and to the flight control module. See elements 1–3 in Fig. 1. The main requirement for this component is to use many different drones to produce a large descriptive database for the drone's RF signals.
• Flight control module: a mobile phone or a flight controller that sends and receives RF commands to and from the drones under analysis to change their flight mode. Controlling drones via mobile phones requires installing mobile applications that can be downloaded from various stores. See elements 4–6 in Fig. 1.
• RF sensing module: an RF receiver that intercepts the drone's communications with the flight control module. The receiver is connected to a laptop, via cable, that runs a program responsible for fetching, processing and storing the sensed RF data in a database. The requirement for this component is to capture all unlicensed RF bands that drones operate on without any prior assumption on its flight mode. See elements 7–10 in Fig. 1.
The drone detection and identification subsystem is comprised of the following two components:

• Signal transformation: it transforms the archived complex RF signals to reveal latent information that can be learned for efficient detection and identification. See element 11 in Fig. 1.
• Multi-class classification: it classifies the transformed RF signals using deep neural networks to detect and identify intruding drones. See elements 12–13 in Fig. 1. The requirement for this component is to be computationally light for real-time deployment and operation.

Fig. 1. System model comprised of the following subsystems: (A) RF database development and (B) drone detection and identification. The system elements are as follows: (1) drones under analysis, (2) RF signal transmitted from the drone to the flight module, (3) RF signal transmitted from the flight module to the drone, (4) flight controller, (5) mobile phone acting as a flight controller, (6) mobile applications used to control various drones, (7) NI-USRP 2943R RF receiver to intercept the drone RF communications, (8) PCIe cable connecting the RF receiver with a laptop, (9) laptop acting as a processing unit for the intercepted RF data, (10) archived RF signals of various drones under different flight modes, (11) signal transformation to reveal latent information on the archived RF data, (12) multi-class classifier designed using DNNs, and (13) the system output showing the identity and flight mode of an intruding drone.

3.2. RF Database development

3.2.1. Drones under analysis
Different drones can manifest in different RF signals; which in return, can be exploited by intelligent systems for detection and identification. The following is an initial list of the drones used to build our database:

• Parrot Bebop, shown in Fig. 2(a).
• Parrot AR Drone, demonstrated in Fig. 2(b).
• DJI Phantom 3, illustrated in Fig. 2(c).

These drones are commonly used in research and civilian applications as they vary in size, price, capability and technology [55,56]. Table 1 lists the drones main specifications. In this work, we are limited by having only three drones; however, the developed open source database is meant to be expanded by researchers and students using other types of drones.

Table 1
Specifications of the drones under analysis. For more details, one can read the full specifications in [57–59].

Specification | Parrot Bebop | Parrot AR Drone | DJI Phantom 3
Dimensions (cm) | 38×33×3.6 | 61×61×12.7 | 52×49×29
Weight (g) | 400 | 420 | 1216
Battery capacity (mAh) | 1200 | 1000 | 4480
Max. range (m) | 250 | 50 | 1000
Connectivity | WiFi (2.4 GHz and 5 GHz) | WiFi (2.4 GHz) | WiFi (2.4 GHz–2.483 GHz) + RF (5.725 GHz–5.825 GHz)

3.2.2. Flight control module
It consists of flight controllers, or mobile phones, that send and receive RF commands to and from the drones under analysis to alter their flight mode, see Fig. 3. Controlling the drones by a mobile phone requires mobile applications that are specifically developed for each drone. "FreeFlight Pro", "AR.FreeFlight", and "DJI Go" are free mobile applications developed to control the Bebop, AR, and Phantom drones, respectively. Other applications can be used; however, in this work, we utilized the official application of each drone.

3.2.3. RF Sensing module
It consists of RF receivers, to intercept the drone RF communications with the flight control module, connected to laptops that are responsible for fetching, processing and storing the recorded RF signals in a database. In this work, we assumed that all drones use WiFi operated at 2.4 GHz; hence, there are some minimal assumptions. Nevertheless, one can determine the drone operating frequency using various methods such as passive frequency scanning.

First, raw RF samples are acquired using two National Instruments USRP-2943 (NI-USRP) software defined radio reconfigurable devices, shown in Fig. 4(a). Table 2 lists the NI-USRP RF receivers' specifications. Since each RF receiver has a maximum instantaneous bandwidth of 40 MHz, both receivers must be operated simultaneously to at least capture a technology spectrum such as WiFi (i.e. 80 MHz²), where the first receiver captures the lower half of the frequency band, and the second records the upper half. After that, captured RF data is transferred from the NI-USRP receivers to two standard laptops via Peripheral Component Interconnect Express (PCIe) interface kits, as shown in Fig. 4(b).

Table 2
Specifications of the USRP-2943 40 MHz RF receivers [61].

Number of channels | 2
Frequency range | 1.2 GHz – 6 GHz
Frequency step | < 1 kHz
Gain range | 0 dB to 37.5 dB
Maximum instantaneous bandwidth | 40 MHz
Maximum I/Q sample rate | 200 MS/s
ADC resolution | 14 bit

(a) Parrot Bebop drone [57]. (b) Parrot AR 2.0 elite edition drone [58]. (c) DJI Phantom 3 standard drone [59].
Fig. 2. Three drones used to build the drone RF database.
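As a quick check of this two-receiver arrangement, the following sketch (our illustration, not part of the released implementations) derives the band edges covered by each receiver from the carrier frequencies and IQ rate later listed in Table 3:

```python
# Sketch: frequency coverage of the two USRP-2943 receivers, assuming the
# carrier frequencies (2422 and 2462 MHz) and the 40 MHz IQ rate of Table 3.
fc_low, fc_high, bw = 2422e6, 2462e6, 40e6  # Hz

low_half = (fc_low - bw / 2, fc_low + bw / 2)     # 2.402-2.442 GHz
high_half = (fc_high - bw / 2, fc_high + bw / 2)  # 2.442-2.482 GHz

# Together the two halves tile a contiguous 80 MHz across the 2.4 GHz WiFi band.
print(low_half, high_half)
```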
Finally, data fetching, processing and storing are performed by programs we designed in LabVIEW Communications System Design Suite [63]. The programs are designed in a standard LabVIEW manner using front panel and block diagram environments. As demonstrated in Fig. 5, by using the front panel one can alter the captured band (lower half or upper half of the RF spectrum), carrier frequency, IQ rate, number of samples per segment, and gain, and activate a specific channel of the NI-USRP receiver. In addition, one can select different flight modes and experiments to build a comprehensive database. One can download the developed LabVIEW programs from our database website in [64].

Fig. 5. Front panel of the LabVIEW programs installed on the laptops to capture the drones RF communications. The "Band" option is selected as "Low" for the first laptop and "High" for the second laptop. This can be used to recreate the developed LabVIEW programs from scratch; however, one can simply download them from our database website in [64].

(a) Drone controller for the DJI Phantom 3 drone [59]. (b) FS-TH9X general radio controller for multicopters [60].
Fig. 3. Various drone radio controllers.

(a) NI USRP-2943R RF receiver [61]. (b) PCIe interface kit [62].
Fig. 4. Elements of the RF module to intercept the drones RF signals.

3.2.4. RF database
RF-based drone detection and identification applications require a comprehensive database of RF signals to be used for training and testing. The database must contain RF background activities (when drones are absent) and RF drone activities (when drones are present) to be used for drone detection. In addition, it must encompass the RF signals of different drones operating under different flight modes to be used for drone identification purposes and to determine the flight mode of intruding drones.

² The true bandwidth of 2.4 GHz WiFi is 94 MHz plus 3 MHz as guard bands at the beginning and end. However, for simplicity, we will not capture the last channel, channel 14, and the first and last 1 MHz of the remaining spectrum, as they contain negligible information. Note that to acquire the entire WiFi spectrum using a single receiver, a different USRP with a larger bandwidth is needed.

Fig. 6. Experimental setup for the RF database development subsystem, subsystem A in Fig. 1, using the Bebop drone. The Bebop drone is shown on the middle, the NI-USRP RF receivers are shown on the right and are connected to the laptops, shown on the left, via the PCIe connectors.

3.2.4.1. Experimental setup.
Fig. 6 illustrates the experimental setup for the RF database development subsystem, subsystem A in Fig. 1, to be used for populating the database with the required RF signatures. To conduct any experiment using this setup, one must perform the following tasks carefully and sequentially. If you are recording RF background activities, perform tasks 4–7 only.

1. Turn on the drone under analysis and connect to it using a mobile phone or a flight controller.
2. In case a mobile phone is used as a controller, start the mobile application to control the drone and to change its flight mode.
3. Check the drone connectivity and operation by performing simple takeoff, hovering, and landing tests.
4. Turn on the RF receivers to intercept all RF activities and to transfer those to the laptops via the PCIe connectors.
5. Open the LabVIEW programs, installed on the laptops, and select appropriate parameters depending on your experiment and requirements.
6. Start the LabVIEW programs to fetch, process and store RF data segments.
7. Stop the LabVIEW programs when you are done with the experiment.
8. For a different flight mode, go back to step 6, and for different drones go back to step 1.

3.2.4.2. Experiments.
The RF drone database is populated with the required signatures by conducting experiments organized in a tree manner with three levels as demonstrated in Fig. 7. The first level consists of the following branches to train and assess the drone detection system:

• Drones are off; RF background activities are recorded.
• Drones are on; drones RF activities are recorded.

The second level includes experiments that are conducted on the three drones under analysis: Bebop, AR, and the Phantom drones, to train and assess the drone identification system.

Finally, the third level expands its predecessor by explicitly controlling the flight mode of each drone under analysis as follows, to assess the identification system ability in determining the flight mode of intruding drones:

• On and connected to the controller.
• Hovering automatically with no physical intervention nor control commands from the controller. Hovering altitude is determined by the drone manufacturer (approximately one meter).
• Flying without video recording. Note that the drone must not hit any obstacles in this experiment to avoid warning signals.
• Flying with video recording. Note that the drone must not hit any obstacles in this experiment to avoid warning signals.

The former experiments are conducted by following the steps summarized in Section 3.2.4.1.

Fig. 7. Experiments to record drones RF signatures organized in a tree manner consisting of three levels. The horizontal dashed red lines define the levels. BUI is a Binary Unique Identifier for each component to be used in labeling. Note that the BUI for background activities is always filled with zeros.

3.2.4.3. Labeling.
A Binary Unique Identifier (BUI) is used to label the RF database entries according to the conducted experiment, drone type, and its specific flight mode, see Fig. 7. The BUI is comprised of two binary numbers concatenated such that: BUI = [msBUI, lsBUI]. msBUI is the most significant part of the BUI, representing the experiment and drone type (levels one and two), while lsBUI is the least significant part of the BUI, representing the drone flight mode (third level). The BUI length L is determined using the total number of experiments E, the total number of drones D, and the total number of flight modes F as follows:

L = \lceil \log_2(E) \rceil + \lceil \log_2(D) \rceil + \lceil \log_2(F) \rceil,   (1)

where \lceil \cdot \rceil is the ceiling operator and in this work, E = 2, D = 3 and F = 4; therefore, L = 5. Extending the developed database using other experiments, drones, or flight modes can be easily done by increasing E, D, or F, respectively. One can always add zeros to the left of the BUI parts to extend the database labeling. For instance, if the current database is extended using E = 4, D = 5 and F = 9, a previously developed BUI = 10111 will become 010010011.
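As a worked example of Eq. (1) and the zero-padding rule above, the following sketch (ours, not part of the released code) reproduces both BUI lengths:

```python
from math import ceil, log2

def bui_length(E: int, D: int, F: int) -> int:
    # Eq. (1): L = ceil(log2(E)) + ceil(log2(D)) + ceil(log2(F))
    return ceil(log2(E)) + ceil(log2(D)) + ceil(log2(F))

print(bui_length(2, 3, 4))  # 5 bits, as in this work (E = 2, D = 3, F = 4)
print(bui_length(4, 5, 9))  # 9 bits for the extended example; BUI = 10111
                            # becomes 010010011 after zero-padding each part
```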
3.2.4.4. Database format.
Captured RF signals are stored as segments, to avoid memory overflows, using a standard comma-separated values (csv) format. This makes the drone RF database easy to load and interpret on any preferred software. Metadata for each segment in the database is included within its filename. It contains the segment BUI, followed by the selected RF frequency band (to determine if it is the first or second half of the RF spectrum) and its segment number. For instance, the third segment of the second half of the RF spectrum with BUI = 11010, a Phantom drone with flight mode number 3, will have the following file name: "11010H_3.csv".
3.3. Drone detection and identification

The developed drone RF database is used to train and test deep neural networks to assess the database feasibility to be used for drone detection and identification.

3.3.1. Signal transformation
It is performed to reveal latent information on the archived RF signals that can be learned for efficient detection and identification (see component 11 in Fig. 1). First, since we are using two NI-USRP RF receivers that are not operated in MIMO mode³, we compute the discrete Fourier transform (DFT) of each recorded segment coming from both receivers as follows:

y_i^{(L)}(m) = \left\| \sum_{n=1}^{N} x_i^{(L)}(n) \exp\left( \frac{-j 2\pi m (n-1)}{N} \right) \right\|,   (2)

y_i^{(H)}(m) = \left\| \sum_{n=1}^{N} x_i^{(H)}(n) \exp\left( \frac{-j 2\pi m (n-1)}{N} \right) \right\|,   (3)

where x_i^{(L)} is the ith RF segment coming from the first RF receiver that captures the lower half of the RF spectrum, x_i^{(H)} is the ith RF segment coming from the second RF receiver that captures the upper half of the RF spectrum, y_i^{(L)} and y_i^{(H)} are the spectra of the ith segments coming from the first and second RF receivers respectively, n and m are the time and frequency domain indices, N is the total number of time samples in the RF segment i, and \| \cdot \| is the magnitude operator used to compute the power spectrum. Note that y_i^{(L)} and y_i^{(H)} solely hold the positive spectra of x_i^{(L)} and x_i^{(H)} to ensure non-redundant and concise spectral projections. After that, we concatenate the transformed signals of both receivers to build the entire RF spectrum, i.e.:

y_i = \left[ y_i^{(L)}, \; c \, y_i^{(H)} \right],   (4)

c = \frac{\sum_{q=0}^{Q} y_i^{(L)}(M - q)}{\sum_{q=0}^{Q} y_i^{(H)}(q)},   (5)

where c is a normalization factor calculated as the ratio between the last Q samples of the lower spectra, y_i^{(L)}, and the first Q samples of the upper spectra, y_i^{(H)}, and M is the total number of frequency bins in y_i. The normalization factor c ensures spectral continuity between the two halves of the RF spectrum as they were captured using different devices; hence, a spectral bias is inevitable. Note that Q must be relatively small to successfully stitch the two spectra and large enough to average out any random fluctuations, e.g. Q = 10 for M = 2048.

³ Utilizing two NI-USRP receivers in Multiple Inputs Multiple Outputs (MIMO) mode ensures time domain synchrony between the two acquired signals; thus, time domain summation can be performed. However, in this work, this is not the case as the receivers are operated independently.
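A minimal NumPy sketch of Eqs. (2)–(5) is given below; it is our own rendition (the released transformation code uses the MATLAB FFT, see Section 4.1) and assumes de-trended, equal-length raw segments:

```python
import numpy as np

def stitch_spectra(x_low: np.ndarray, x_high: np.ndarray, Q: int = 10) -> np.ndarray:
    """Build the full spectrum y_i from the two receivers' raw segments."""
    y_low = np.abs(np.fft.rfft(x_low))    # Eq. (2): positive magnitude spectrum
    y_high = np.abs(np.fft.rfft(x_high))  # Eq. (3): positive magnitude spectrum
    # Eq. (5): ratio of the last Q+1 bins of y_low to the first Q+1 bins of
    # y_high (q = 0, ..., Q), averaging out random fluctuations.
    c = y_low[-(Q + 1):].sum() / y_high[:Q + 1].sum()
    return np.concatenate([y_low, c * y_high])  # Eq. (4)
```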
Fig. 8. Deep neural network with L − 1 hidden layers. The input layer, on the left, is outlined with a dashed rounded red rectangle; the hidden layers, on the middle, are identified by blue solid rectangles; and lastly, the output layer, on the right, is determined by a dashed rounded green rectangle.

3.3.2. Multi-class classification
Detection and identification of intruding drones is performed by a multi-class classifier designed using deep neural networks (DNN). The system must be able to detect drones and to differentiate between the RF spectra of various drones under different flight modes. A DNN consists of an input layer, hidden layers, and an output layer as shown in Fig. 8. One can formulate the input–output relationship of a DNN using the following expressions [65]:

z_i^{(l)} = f^{(l)}\left( W^{(l)} z_i^{(l-1)} + b^{(l)} \right),   (6)

W^{(l)} = \begin{bmatrix} w_{11}^{(l)} & w_{12}^{(l)} & \cdots & w_{1H^{(l-1)}}^{(l)} \\ w_{21}^{(l)} & w_{22}^{(l)} & \cdots & w_{2H^{(l-1)}}^{(l)} \\ \vdots & \vdots & \ddots & \vdots \\ w_{H^{(l)}1}^{(l)} & w_{H^{(l)}2}^{(l)} & \cdots & w_{H^{(l)}H^{(l-1)}}^{(l)} \end{bmatrix},   (7)

where z_i^{(l-1)} is the output of layer l − 1 and the input to layer l; z_i^{(l)} is the output of layer l and the input to layer l + 1; z_i^{(0)} = y_i is the spectrum of the RF segment i; z_i^{(L)} = \hat{d}_i is the classification vector for the RF segment i; W^{(l)} is the weight matrix of layer l; w_{pq}^{(l)} is the weight between the pth neuron of layer l and the qth neuron of layer l − 1; b^{(l)} = [b_1^{(l)}, b_2^{(l)}, \ldots, b_{H^{(l)}}^{(l)}]^T is the bias vector of layer l; f^{(l)} is the activation function of layer l; l = 1, 2, \ldots, L; L − 1 is the total number of hidden layers; H^{(l)} is the total number of neurons in layer l; H^{(0)} = M; H^{(L)} = C; and C is the number of classes in the classification vector, d_i [65]. Note that f can be any linear or non-linear function; however, the rectified linear unit (ReLU) and the sigmoid functions, expressed in Eq. (8) and Eq. (9) respectively, are typical selections that have shown promising results [66]:

f(x) = \begin{cases} x & x > 0 \\ 0 & x \leq 0 \end{cases},   (8)

f(x) = \frac{1}{1 + e^{-x}}.   (9)

The weights and biases of the DNN are determined through a supervised learning process that minimizes the classification error [67]. The minimization is performed by a gradient descent algorithm where the gradient is computed through back-propagation [65,67]. The classification error of the system is modeled by the mean square error such that:

\mathcal{L}\left( d_i, \hat{d}_i \right) = \frac{1}{C} \sum_{c=1}^{C} \left( d_i(c) - \hat{d}_i(c) \right)^2,   (10)

where \hat{d}_i and d_i are the estimated and true classification vectors of the RF segment i, respectively, and C = D F is the total number of classes, see Section 3.2.4.3.

In this work, three DNNs are trained and tested using the developed RF database to perform the following tasks: detect the presence of a drone, detect the presence of a drone and identify its type, and lastly, detect the presence of a drone, identify its type, and determine its flight mode.
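For concreteness, a NumPy sketch of the forward pass of Eqs. (6)–(9) and the loss of Eq. (10) follows; it is our illustration of the formulation rather than the trained implementation:

```python
import numpy as np

def relu(x):       # Eq. (8)
    return np.maximum(x, 0.0)

def sigmoid(x):    # Eq. (9)
    return 1.0 / (1.0 + np.exp(-x))

def forward(y_i, weights, biases):
    """Eq. (6): z^(l) = f^(l)(W^(l) z^(l-1) + b^(l)), with z^(0) = y_i."""
    z = y_i
    for l, (W, b) in enumerate(zip(weights, biases)):
        a = W @ z + b
        # ReLU on the hidden layers, sigmoid on the output layer
        z = sigmoid(a) if l == len(weights) - 1 else relu(a)
    return z  # z^(L), the estimated classification vector d_hat_i

def mse(d, d_hat):  # Eq. (10)
    return np.mean((d - d_hat) ** 2)
```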
3.3.3. Cross-validation
Estimating the performance of the RF-based drone detection and identification system is conducted using stratified K-fold cross-validation; an iterative process that repeats for K times to produce performance estimates with low bias and low variance regardless of the size difference between classes [68].

First, the drone RF database is randomly segmented into K disjoint folds with a balanced number of instances of each class in each fold [68]. After that, at an arbitrary iteration k, fold k is used

as testing data for the DNNs while the rest of the RF database is used for training. This process is repeated K times such that the DNNs are tested using the entire RF database [69]. Finally, performance of the system is estimated by the average performance of all iterations resulting from the K-fold cross-validation procedure [69].
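A minimal sketch of this protocol is shown below; it is our illustration using scikit-learn, with placeholder data and a stand-in classifier (the paper's own DNNs are built in Keras, see Section 4.1):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(227, 2048))  # placeholder spectra, one row per segment
y = rng.integers(0, 2, size=227)  # placeholder labels, e.g. drone / no drone

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in skf.split(X, y):
    # fold k is held out for testing; the remaining folds are used for training
    clf = MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=50)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(np.mean(accuracies))  # performance averaged over the K iterations
```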

3.3.4. Performance evaluation
Average performance of the RF-based drone detection and identification system is presented using accuracy, precision, recall, error, false discovery rate (FDR), false negative rate (FNR) and F1 scores via confusion matrices. These performance metrics are defined as follows:

accuracy = \frac{TP + TN}{TP + TN + FP + FN},   (11)

precision = \frac{TP}{TP + FP},   (12)

recall = \frac{TP}{TP + FN},   (13)

error = 1 − accuracy,   (14)

FDR = 1 − precision,   (15)

FNR = 1 − recall,   (16)

F1 score = 2 \left( \frac{precision \times recall}{precision + recall} \right),   (17)

where TP, TN, FP and FN are true positives, true negatives, false positives, and false negatives, respectively.
where TP, TN, FP and FN are true positives, true negatives, false Settings and parameters Laptop 1 Laptop 2
positives, and false negatives, respectively. NI-USRP device RIO0 RIO0
Fig. 9 shows an example confusion matrix for a 3-class classi- Active channel RX2 RX2
fication problem where the rows and columns of the inner 3 × 3 RF band L H
matrix correspond to the predicted and true classes respectively. Carrier frequency (MHz) 2422 2462
IQ rate (MHz) 40 40
The diagonal cells, highlighted in green, represent correctly classi-
Number of samples per segment 107 107
fied segments, while off-diagonal cells, highlighted in red, depict Gain (dB) 30 30
incorrectly classified segments. The number of segments and the
percentage of the total number of segments are shown in each
cell in bold. The gray column on the far right illustrates the
precision in green, and FDR of the system in red. Furthermore, encompassing various RF signatures. In addition, we further seg-
the gray row at the bottom demonstrates the recall in green, and mented our database by a factor of 100 to increase the number
FNR of the system in red. In addition, the blue cell in the bottom of segments for better learning and to ensure an instantaneous
right of the plot shows the overall accuracy in green, and error representation of the RF signal (N = 105 ).
in red. Moreover, the yellow column and row on the far left and Signal transformation of each archived RF segment is per-
top show the F1 scores for predicting each class in green and its formed using MATLAB FFT function with 2048 frequency bins
complementary in red, (1 − F1 score), for completeness. Finally, (M = 2048). The full RF spectrum is constructed from its two
(L) (H)
the orange cell in the upper left of the plot shows the averaged half’s, yi and yi , using Eq. (4) with 10 returning points to
F1 score for all classes in green and its complementary in red. ensure spectral continuity (Q = 10). This results in 46,489,600
RF samples to be used in the drone detection and identification
4. Results and discussions system. Note that, the FFT is performed on zero-mean signals that
are computed by a de-trending process to remove zero-frequency
In this Section, we first present the experimental settings components.
and preprocessing utilized in this work to develop the drone Three DNNs are designed in Python using Keras to perform
RF database and the RF-based drone detection and identification the following tasks: detect the presence of a drone, detect the
system. After that, we present snippets from the developed RF presence of a drone and identify its type, and lastly, detect the
database and analyze its spectral information for different drones presence of a drone, identify its type, and determine its flight
under different flight modes. Finally, we present and discuss mode. Each DNN is trained by an Adam optimizer to minimize the
results of the RF-based drone detection and identification system. classification mean square error, see Eq. (10), using the following
parameters: 3 hidden fully-connected layers (L − 1 = 3), 256,
4.1. Settings and preprocessing 128 and 64 total number of neurons at the first, second and third
hidden layers respectively (H (1) = 256, H (2) = 128, H (3) = 64),
The LabVIEW programs, installed on both laptops, are operated total number of epochs is 200, batch size is 10, and lastly, f
with settings and parameters that are summarized in Table 3. We is the ReLU function for the hidden layers, see Eq. (8), and the
recorded 10.25 s of RF background activities and approximately sigmoid function for the output layer, see Eq. (9). The classifica-
5.25 s of RF drone communications for each flight mode. This tion performance of each network is validated using a stratified
has produced a drone RF database with over 40 GB of data 10-fold cross-validation process (K = 10) and evaluated using

confusion matrices, see Sections 3.3.3 and 3.3.4. One must note that better classification results can be achieved using different multi-class classifiers, deeper neural networks, and/or different hyper-parameters; however, in this work, we are only testing the RF database feasibility to be used for drone detection and identification. Therefore, achieving the highest performance is beyond the scope of this paper.
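A minimal Keras sketch of this configuration is given below; the layer sizes, activations, optimizer, loss, epochs and batch size follow the settings above, while the exact API calls (a modern tensorflow.keras Sequential model) are our assumption rather than the authors' released code:

```python
from tensorflow import keras

def build_dnn(M: int = 2048, C: int = 2) -> keras.Model:
    """DNN with 3 fully-connected hidden layers of 256, 128 and 64 ReLU neurons."""
    model = keras.Sequential([
        keras.layers.Input(shape=(M,)),                # H(0) = M frequency bins
        keras.layers.Dense(256, activation="relu"),    # H(1)
        keras.layers.Dense(128, activation="relu"),    # H(2)
        keras.layers.Dense(64, activation="relu"),     # H(3)
        keras.layers.Dense(C, activation="sigmoid"),   # H(L) = C classes
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

model = build_dnn(C=2)  # C = 2, 4 or 10 for the three DNNs
# model.fit(X_train, D_train, epochs=200, batch_size=10)
```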
4.2. Analysis of the drone RF database

Table 4
Details of the developed drone RF database showing the number of raw samples and segments for each class at each experiment level. Note that the total number of samples is divided equally between the recordings coming from the first and second RF receivers (x^{(L)} and x^{(H)}). For more details, see Fig. 7 and Section 3.2.4.2.

Level | Class | Segments | Samples | Ratio (%)
1 | Drone | 186 | 3720×10^6 | 81.94%
1 | No Drone | 41 | 820×10^6 | 18.06%
2 | Bebop | 84 | 1680×10^6 | 37.00%
2 | AR | 81 | 1620×10^6 | 35.68%
2 | Phantom | 21 | 420×10^6 | 9.25%
2 | No Drone | 41 | 820×10^6 | 18.06%
3 | Bebop mode 1 | 21 | 420×10^6 | 9.25%
3 | Bebop mode 2 | 21 | 420×10^6 | 9.25%
3 | Bebop mode 3 | 21 | 420×10^6 | 9.25%
3 | Bebop mode 4 | 21 | 420×10^6 | 9.25%
3 | AR mode 1 | 21 | 420×10^6 | 9.25%
3 | AR mode 2 | 21 | 420×10^6 | 9.25%
3 | AR mode 3 | 21 | 420×10^6 | 9.25%
3 | AR mode 4 | 18 | 360×10^6 | 7.93%
3 | Phantom mode 1 | 21 | 420×10^6 | 9.25%
3 | No Drone | 41 | 820×10^6 | 18.06%

Table 4 illustrates the total number of segments and samples for the recordings in the developed drone RF database at each experiment level. One can note a class imbalance problem due to the different sample sizes for different classes, hence we will be using stratified cross-validation to assess the drone detection and identification system. Fig. 10 shows snippets of raw recordings from the developed RF database. One can observe that drone RF communications can be fully captured using one RF receiver (x^{(H)} amplitude is lower than x^{(L)} in Fig. 10(b)). Nevertheless, one cannot make such an assumption as drones can automatically or intentionally change their operating channel and the utilized wireless technology.

Raw RF segments are transformed by DFT to reveal latent information that can be learned for efficient detection and identification. Fig. 11 demonstrates spectral and statistical analysis of the acquired RF data. Subfigures (a–c) show the average spectra of the RF signals that are supplied to the first, second and third DNNs respectively. In addition, subfigures (d–f) illustrate the statistical distribution of the average spectra in subfigures (a–c) using boxplots. One can note that, by using Fig. 11(a), detecting the presence of a drone can be performed effectively by the first DNN as the two spectra show obvious differences that can be verified by the boxplots in Fig. 11(d). Furthermore, in Fig. 11(b), one can observe the alikeness among the Bebop and AR RF signals and their different morphology when compared to the Phantom drone or RF background activities. Such similarities can hinder the second DNN from accurately differentiating these drones as confirmed by the boxplots in Fig. 11(e). Lastly, by using Fig. 11(c), the previous observation can be formally stated as follows: Bebop and AR drones have similar RF communications since they produce similar spectra for different flight modes. This is logical, as both drones are manufactured by the same company, Parrot. Therefore, detecting the flight modes of these two drones presents difficulties for any intelligent system, see Fig. 11(f) for statistical verification.

4.3. Drone detection and identification

Performance evaluation of the three developed DNNs is shown in Fig. 12 using confusion matrices. See Section 3.3.4 for more details on how to interpret a confusion matrix. First, Fig. 12(a) shows the classification performance of the first DNN which detects the presence of a drone. Results demonstrate an average accuracy of 99.7%, average error of 0.3%, and average F1 score of 99.5%. Moreover, Fig. 12(b) depicts the classification performance of the second DNN which detects the presence of a drone and identifies its type. Results demonstrate an average accuracy of 84.5%, average error of 15.5%, and average F1 score of 78.8%. Finally, Fig. 12(c) illustrates the classification performance of the third DNN which detects the presence of a drone, identifies its type, and determines its flight mode. Results demonstrate an average accuracy of 46.8%, average error of 53.2%, and average F1 score of 43%. Generally, one can observe a decline in performance when increasing the number of classes. This can be explained by the similarities of RF communications of the Bebop and AR drones, see Fig. 11. The recall when detecting background and Phantom RF signatures remained high for the second and third DNNs, 96.1% and 97.4% (see Eq. (13) and the right columns of the confusion matrices in Fig. 12). However, detecting the Bebop and AR drones or identifying their flight modes is almost random. The former observations are aligned with the analysis presented in Section 4.2. Nevertheless, results of the developed system still demonstrate the feasibility of the developed drone RF database to be used for detection and identification.

5. Conclusions

As drones are becoming more popular among civilians, regulating entities demand intelligent systems that are capable of detecting and identifying intruding drones. However, the design of such systems is hindered by the lack of large labeled open source databases. This work is a contribution towards this goal by developing a database of drones Radio Frequency (RF) communications that can be further extended by researchers and students. The developed database encompasses RF signals of various drones under different flight modes; therefore, it can be used to test and validate intelligent algorithms, and can be adopted to design drone detection and identification systems.

We have collected, analyzed, and recorded raw RF signals of different drones under different flight modes such as: off, on and connected, hovering, flying, and video recording. After that, to test the feasibility of the developed database, we used deep neural networks (DNNs) to detect and identify intruding drones and to determine their flight mode. We designed, validated, and evaluated three DNNs to perform the following tasks: detect the presence of a drone, detect the presence of a drone and identify its type, and lastly, detect the presence of a drone, identify its type, and determine its flight mode.

Results of the developed systems showed a general decline in performance when increasing the number of classes. Average accuracy has decreased from 99.7% for the first DNN (2 classes), to 84.5% for the second DNN (4 classes), and lastly, to 46.8% for the third DNN (10 classes). This decrease was shown to be caused by similarities observed on some drones RF spectra as they were manufactured by the same company, e.g. the Bebop and AR drones. This introduces a challenging obstacle that can be mitigated using deeper neural networks or by other advanced classification algorithms. Nevertheless, results of the developed drone detection and identification system demonstrate the feasibility of the developed database to be used for testing and validating intelligent algorithms and to design advanced drone detection and identification systems.

Fig. 10. Snippets from the developed drone RF database. x^{(L)} and x^{(H)} are plotted in blue and red respectively with normalized amplitudes from −1 to 1. Fig. 10(a) shows segment number 5 of the acquired RF background activities, Fig. 10(b) shows segment number 10 of the acquired Bebop RF signals when flying and video recording, and lastly, Fig. 10(c) shows segment number 7 of the acquired Phantom RF signals when on and connected. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 11. Spectral and statistical analysis of the acquired RF signals to be supplied for three drone detection and identification DNNs. Figs. (a–c) show the average
power spectra of the acquired RF signals while Figs. (d–f) show the boxplot of the computed spectra. Note that amplitudes of the average spectra are normalized
to discard biases in the analysis and that they are smoothed using a 10-point moving average filter to ease visual interpretations. In Fig. 11(a), class 1 is for RF
background activities and class 2 is for the drones RF communications (to be supplied to the first DNN). In Fig. 11(b), class 1 is for RF background activities and
classes 2–4 are for the Bebop, AR and Phantom drones (to be supplied to the second DNN). In Fig. 11(c), class 1 is for RF background activities, classes 2–5 are for
the Bebop 4 different flight modes, classes 6–9 are for the AR 4 different flight modes, and lastly, class 10 is for the Phantom single flight mode (to be supplied to
the third DNN).

The developed drone RF database is open source and can be found in [64] along with all the implementations required to reproduce the results of this work.

In the future, one can extract features from the developed drone RF database to be used for detection and compare their results with the outcomes of our system. In addition, the developed database can be used to train and test different detectors and network architectures to systematically converge to the best detection and identification system. Furthermore, fusing the developed database with other drone detection modalities, such as camera images and videos, radar echoes, and acoustic recordings, can ameliorate the performance of the detection and identification system by exploiting the strengths of each modality.

Fig. 12. Average classification performance for the three designed DNNs using confusion matrices. In Fig. 12(a), class 1 is for RF background activities and class 2
is for the drones RF communications. In Fig. 12(b), class 1 is for RF background activities and classes 2–4 are for the Bebop, AR and Phantom drones. In Fig. 12(c),
class 1 is for RF background activities, classes 2–5 are for the Bebop 4 different flight modes, classes 6–9 are for the AR 4 different flight modes, and lastly, class 10
is for the Phantom single flight mode.

The developed database can be extended by researchers and students alike in various ways such as: (1) investigating other classification algorithms, (2) expanding the developed database by augmentation, e.g. adding channel fading or noise, (3) performing the same experiments using other drones, (4) studying the effects of RF interference and noise when detecting and identifying drones, (5) conducting experiments for indoor and outdoor flying, (6) varying the drone speed and distance from the RF receiver, and many others.

Acknowledgments

This publication was supported by Qatar University Internal Grant No. QUCP-CENG-2018/2019-1. The work of Aiman Erbad is supported by grant number NPRP 7-1469-1-273. The findings achieved herein are solely the responsibility of the authors.

Declaration of competing interest

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work.

References

[1] J.Y.J. Chow, Dynamic UAV-based traffic monitoring under uncertainty as a stochastic arc-inventory routing policy, Int. J. Transp. Sci. Technol. 5 (3) (2016) 167–185, http://dx.doi.org/10.1016/j.ijtst.2016.11.002.
[2] F. Mohammed, A. Idries, N. Mohamed, J. Al-Jaroodi, I. Jawhar, UAVs for smart cities: Opportunities and challenges, in: 2014 International Conference on Unmanned Aircraft Systems, ICUAS, 2014, pp. 267–273, http://dx.doi.org/10.1109/ICUAS.2014.6842265.
[3] V.V. Klemas, Coastal and environmental remote sensing from unmanned aerial vehicles: An overview, J. Coast. Res. (2018) 1260–1267, http://dx.doi.org/10.2112/jcoastres-d-15-00005.1.
[4] M. Erdelj, E. Natalizio, K.R. Chowdhury, I.F. Akyildiz, Help from the Sky: Leveraging UAVs for disaster management, IEEE Pervasive Comput. 16 (1) (2017) 24–32, http://dx.doi.org/10.1109/MPRV.2017.11.
[5] Y. Huang, S.J. Thomson, W.C. Hoffmann, Y. Lan, B.K. Fritz, Development and prospect of unmanned aerial vehicle technologies for agricultural production management, Int. J. Agric. Biol. Eng. 6 (3) (2013) 1–10, http://dx.doi.org/10.3965/j.ijabe.20130603.001.
[6] Y. Ham, K.K. Han, J.J. Lin, M. Golparvar-Fard, Visual monitoring of civil infrastructure systems via camera-equipped Unmanned Aerial Vehicles (UAVs): A review of related works, Vis. Eng. 4 (1) (2016) 1, http://dx.doi.org/10.1186/s40327-015-0029-z.
[7] H. Cruz, M. Eckert, J. Meneses, J.-F. Martínez, Efficient forest fire detection index for application in unmanned aerial systems (UASs), Sensors 16 (6) (2016) 893, http://dx.doi.org/10.3390/s16060893.
[8] X. Shi, C. Yang, W. Xie, C. Liang, Z. Shi, J. Chen, Anti-drone system with multiple surveillance technologies: Architecture, implementation, and challenges, IEEE Commun. Mag. 56 (4) (2018) 68–74, http://dx.doi.org/10.1109/MCOM.2018.1700430.
[9] S.R. Ganti, Y. Kim, Implementation of detection and tracking mechanism for small UAS, in: 2016 International Conference on Unmanned Aircraft Systems (ICUAS), 2016, pp. 1254–1260, http://dx.doi.org/10.1109/ICUAS.2016.7502513.
[10] G. Ding, Q. Wu, L. Zhang, Y. Lin, T.A. Tsiftsis, Y. Yao, An amateur drone surveillance system based on the cognitive internet of things, IEEE Commun. Mag. 56 (1) (2018) 29–35, http://dx.doi.org/10.1109/MCOM.2017.1700452.
[11] İ. Güvenç, F. Koohifar, S. Singh, M.L. Sichitiu, D. Matolak, Detection, tracking, and interdiction for amateur drones, IEEE Commun. Mag. 56 (4) (2018) 75–81, http://dx.doi.org/10.1109/MCOM.2018.1700455.
[12] G.C. Birch, J.C. Griffin, M.K. Erdman, UAS detection classification and neutralization: Market survey, Tech. Rep. SAND2015-6365 606150, Sandia National Laboratories, United States, 2015, http://dx.doi.org/10.2172/1222445.
[13] R.L. Sturdivant, E.K.P. Chong, Systems engineering baseline concept of a multispectral drone detection solution for airports, IEEE Access 5 (2017) 7123–7138, http://dx.doi.org/10.1109/ACCESS.2017.2697979.
[14] İ. Güvenç, O. Ozdemir, Y. Yapici, H. Mehrpouyan, D. Matolak, Detection, localization, and tracking of unauthorized UAS and Jammers, in: 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC), 2017, pp. 1–10, http://dx.doi.org/10.1109/DASC.2017.8102043.

[15] I. Bisio, C. Garibotto, F. Lavagetto, A. Sciarrone, S. Zappatore, Blind detection: Advanced techniques for WiFi-based drone surveillance, IEEE Trans. Veh. Technol. 68 (1) (2019) 938–946, http://dx.doi.org/10.1109/TVT.2018.2884767.
[16] P. Nguyen, H. Truong, M. Ravindranathan, A. Nguyen, R. Han, T. Vu, Cost-effective and passive RF-based drone presence detection and characterization, GetMobile: Mobile Comp. Comm. 21 (4) (2018) 30–34, http://dx.doi.org/10.1145/3191789.3191800.
[17] A. Moses, M.J. Rutherford, K.P. Valavanis, Radar-based detection and identification for miniature air vehicles, in: 2011 IEEE International Conference on Control Applications (CCA), 2011, pp. 933–940, http://dx.doi.org/10.1109/CCA.2011.6044363.
[18] G.J. Mendis, T. Randeny, J. Wei, A. Madanayake, Deep learning based doppler radar for micro UAS detection and classification, in: MILCOM 2016 - 2016 IEEE Military Communications Conference, pp. 924–929, http://dx.doi.org/10.1109/MILCOM.2016.7795448.
[19] D. Solomitckii, M. Gapeyenko, V. Semkin, S. Andreev, Y. Koucheryavy, Technologies for efficient amateur drone detection in 5G millimeter-wave cellular infrastructure, IEEE Commun. Mag. 56 (1) (2018) 43–50, http://dx.doi.org/10.1109/MCOM.2017.1700450.
[20] M. Saqib, S.D. Khan, N. Sharma, M. Blumenstein, A study on detecting drones using deep convolutional neural networks, in: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS, 2017, pp. 1–5, http://dx.doi.org/10.1109/AVSS.2017.8078541.
[21] E. Unlu, E. Zenou, N. Riviere, Using shape descriptors for UAV detection, Electron. Imaging 2018 (9) (2018) 128-1–128-5, http://dx.doi.org/10.2352/ISSN.2470-1173.2018.09.SRV-128.
[22] M. Wu, W. Xie, X. Shi, P. Shao, Z. Shi, Real-time drone detection using deep learning approach, in: L. Meng, Y. Zhang (Eds.), Machine Learning and Intelligent Communications, Springer International Publishing, Cham, 2018, pp. 22–32, http://dx.doi.org/10.1007/978-3-030-00557-3_3.
[23] A. Bernardini, F. Mangiatordi, E. Pallotti, L. Capodiferro, Drone detection by acoustic signature identification, Electron. Imaging 2017 (10) (2017) 60–64, http://dx.doi.org/10.2352/ISSN.2470-1173.2017.10.IMAWM-168.
[24] M. Nijim, N. Mantrawadi, Drone classification and identification system by phenome analysis using data mining techniques, in: 2016 IEEE Symposium on Technologies for Homeland Security (HST), 2016, pp. 1–5, http://dx.doi.org/10.1109/THS.2016.7568949.
[25] X. Chang, C. Yang, J. Wu, X. Shi, Z. Shi, A surveillance system for drone localization and tracking using acoustic arrays, in: 2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM), 2018, pp. 573–577, http://dx.doi.org/10.1109/SAM.2018.8448409.
[26] J. Busset, F. Perrodin, P. Wellig, B. Ott, K. Heutschi, T. Rühl, T. Nussbaumer, Detection and tracking of drones using advanced acoustic cameras, in: Proc. SPIE Unmanned/Unattended Sensors and Sensor Networks XI; and Advanced Free-Space Optical Communication Techniques and Applications, 9647 (2015), http://dx.doi.org/10.1117/12.2194309.
[27] C. Yang, Z. Wu, X. Chang, X. Shi, J. Wo, Z. Shi, DOA estimation using amateur drones harmonic acoustic signals, in: 2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop, SAM, 2018, pp. 587–591, http://dx.doi.org/10.1109/SAM.2018.8448797.
[28] J. Mezei, A. Molnár, Drone sound detection by correlation, in: 2016 IEEE 11th International Symposium on Applied Computational Intelligence and
[35] P. Nguyen, M. Ravindranatha, A. Nguyen, R. Han, T. Vu, Investigating cost-effective RF-based detection of drones, in: Proceedings of the 2nd Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use, DroNet '16, ACM, New York, NY, USA, 2016, pp. 17–22, http://dx.doi.org/10.1145/2935620.2935632.
[36] S. Abeywickrama, L. Jayasinghe, H. Fu, C. Yuen, RF-based direction finding of UAVs using DNN, arXiv preprint arXiv:1712.01154.
[37] M.M. Azari, H. Sallouha, A. Chiumento, S. Rajendran, E. Vinogradov, S. Pollin, Key technologies and system trade-offs for detection and localization of amateur drones, IEEE Commun. Mag. 56 (1) (2018) 51–57, http://dx.doi.org/10.1109/MCOM.2017.1700442.
[38] W. Chan, N. Jaitly, Q. Le, O. Vinyals, Listen, attend and spell: A neural network for large vocabulary conversational speech recognition, in: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 4960–4964, http://dx.doi.org/10.1109/ICASSP.2016.7472621.
[39] A. Graves, A. Mohamed, G. Hinton, Speech recognition with deep recurrent neural networks, in: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 6645–6649, http://dx.doi.org/10.1109/ICASSP.2013.6638947.
[40] K. Kang, H. Li, J. Yan, X. Zeng, B. Yang, T. Xiao, C. Zhang, Z. Wang, R. Wang, X. Wang, W. Ouyang, T-CNN: Tubelets with convolutional neural networks for object detection from videos, IEEE Trans. Circuits Syst. Video Technol. (2018) 1–1, http://dx.doi.org/10.1109/TCSVT.2017.2736553.
[41] A.B. Said, M.F. Al-Sa'd, M. Tlili, A.A. Abdellatif, A. Mohamed, T. Elfouly, K. Harras, M.D. O'Connor, A deep learning approach for vital signs compression and energy efficient delivery in mhealth systems, IEEE Access 6 (2018) 33727–33739, http://dx.doi.org/10.1109/ACCESS.2018.2844308.
[42] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (436) (2015), http://dx.doi.org/10.1038/nature14539.
[43] N. Shijith, P. Poornachandran, V.G. Sujadevi, M.M. Dharmana, Breach detection and mitigation of UAVs using deep neural network, in: 2017 Recent Developments in Control, Automation Power Engineering (RDCAPE), 2017, pp. 360–365, http://dx.doi.org/10.1109/RDCAPE.2017.8358297.
[44] C. Aker, S. Kalkan, Using deep networks for drone detection, in: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS, 2017, http://dx.doi.org/10.1109/avss.2017.8078539.
[45] B.K. Kim, H. Kang, S. Park, Drone classification using convolutional neural networks with merged doppler images, IEEE Geosci. Remote Sens. Lett. 14 (1) (2017) 38–42, http://dx.doi.org/10.1109/LGRS.2016.2624820.
[46] H.C. Vemula, Multiple drone detection and acoustic scene classification with deep learning, (Ph.D. thesis), Wright State University, 2018, URL: http://etd.ohiolink.edu/pg_10?0::NO:10:P10_ACCESSION_NUM:wright1547384408540764.
[47] J. Lee, J. Wang, D. Crandall, S. Šabanović, G. Fox, Real-time, cloud-based object detection for unmanned aerial vehicles, in: 2017 First IEEE International Conference on Robotic Computing (IRC), 2017, pp. 36–43, http://dx.doi.org/10.1109/IRC.2017.77.
[48] H. Lu, Y. Li, S. Mu, D. Wang, H. Kim, S. Serikawa, Motor anomaly detection for unmanned aerial vehicles using reinforcement learning, IEEE Internet Things J. 5 (4) (2018) 2315–2322, http://dx.doi.org/10.1109/JIOT.2017.2737479.
[49] A. Saied, R.E. Overill, T. Radzik, Detection of known and unknown DDoS
Informatics, SACI, 2016, pp. 509–518 http://dx.doi.org/10.1109/SACI.2016.
attacks using artificial neural networks, Neurocomputing 172 (2016)
7507430.
385–393, http://dx.doi.org/10.1016/j.neucom.2015.04.101.
[29] J. Mezei, V. Fiaska, A. Molnár, Drone sound detection, in: 2015 16th IEEE
[50] A. Badhe, Using neural networks to detect supply side fraud in pro-
International Symposium on Computational Intelligence and Informatics,
grammatic exchanges, Neural Netw. Mach. Learn. 1 (1) (2017) 1, URL:
CINTI, 2015, pp. 333–338 http://dx.doi.org/10.1109/CINTI.2015.7382945.
http://neuraldatasets.org/index.php/neuralnetworks/article/view/1.
[30] I. Bisio, C. Garibotto, F. Lavagetto, A. Sciarrone, S. Zappatore, Unauthorized
amateur UAV detection based on WiFi statistical fingerprint analysis, IEEE [51] J. Wang, K. Shi, Q. Huang, S. Zhong, D. Zhang, Stochastic switched sampled-
Commun. Mag. 56 (4) (2018) 106–111, http://dx.doi.org/10.1109/MCOM. data control for synchronization of delayed chaotic neural networks with
2018.1700340. packet dropout, Appl. Math. Comput. 335 (2018) 211–230, http://dx.doi.
[31] M. Peacock, M.N. Johnstone, Towards detection and control of civilian org/10.1016/j.amc.2018.04.038.
unmanned aerial vehicles, Proceedings of the 14th Australian Information [52] K. Shi, J. Wang, S. Zhong, X. Zhang, Y. Liu, J. Cheng, New reliable
Warfare Conference. http://dx.doi.org/10.4225/75/57a847dfbefb5. nonuniform sampling control for uncertain chaotic neural networks under
[32] L. Val Terrón, Design, development and assessment of techniques for Markov switching topologies, Appl. Math. Comput. 347 (2019) 169–193,
neutralizing drones, (Ph.D. Thesis), Galician Research and Development http://dx.doi.org/10.1016/j.amc.2018.11.011.
Center in Advanced Telecommunications, 2017, URL: http://castor.det. [53] K. Shi, Y. Tang, S. Zhong, C. Yin, X. Huang, W. Wang, Nonfragile
uvigo.es:8080/xmlui/handle/123456789/96. asynchronous control for uncertain chaotic Lurie network systems with
[33] A. Sun, W. Gong, R. Shea, J. Liu, X.S. Liu, Q. Wang, Drone privacy shield: A Bernoulli stochastic process, Internat. J. Robust Nonlinear Control 28 (5)
WiFi based defense, in: 2017 IEEE 28th Annual International Symposium (2018) 1693–1714, http://dx.doi.org/10.1002/rnc.3980.
on Personal, Indoor, Mobile Radio Commun., PIMRC 2017, pp. 1–5. http: [54] T. Xiao, S. Li, B. Wang, L. Lin, X. Wang, Joint detection and identi-
//dx.doi.org/10.1109/PIMRC.2017.8292780. fication feature learning for person search, in: Computer Vision and
[34] H. Zhang, C. Cao, L. Xu, T.A. Gulliver, A UAV detection algorithm based Pattern Recognition (CVPR), in: 2017 IEEE Conference on, IEEE, 2017, pp.
on an artificial neural network, IEEE Access 6 (2018) 24720–24728, http: 3376–3385, URL: http://openaccess.thecvf.com/content_cvpr_2017/html/
//dx.doi.org/10.1109/ACCESS.2018.2831911. Xiao_Joint_Detection_and_CVPR_2017_paper.html.
M.F. Al-Sa’d, A. Al-Ali, A. Mohamed et al. / Future Generation Computer Systems 100 (2019) 86–97 97

Mohammad Fathi Al-Sa'd received his B.Sc. and M.Sc. degrees in Electrical Engineering from Qatar University, Qatar, in 2012 and 2016, respectively. He specialized in signal processing and graduated with honors under the supervision of Professor Boualem Boashash. He worked as a Research Assistant at Qatar University, and he is currently a Researcher and Doctoral student at the Laboratory of Signal Processing, Tampere University of Technology, Finland. He has served as a technical reviewer for several journals, including Biomedical Signal Processing and Control, and IEEE Access. His research interests include EEG analysis and processing, time–frequency array processing, information flow and theory, modeling, optimization, and machine learning.

Abdullah Al-Ali obtained his master's degree in software design engineering and his Ph.D. degree in Computer Engineering from Northeastern University in Boston, MA, USA, in 2008 and 2014, respectively. He is an active researcher in cognitive radios for smart cities and vehicular ad-hoc networks (VANETs). He has published several peer-reviewed papers in journals and conferences. Dr. Abdulla is currently head of the Technology Innovation and Engineering Education (TIEE) at the College of Engineering in Qatar University.

Amr Mohamed received his M.S. and Ph.D. in electrical and computer engineering from the University of British Columbia, Vancouver, Canada, in 2001 and 2006, respectively. His research interests include wireless networking, edge computing, and security for IoT applications. Dr. Amr Mohamed has co-authored over 160 refereed journal and conference papers, patents, a textbook, and book chapters in reputed international journals and conferences. He serves as a technical editor for two international journals and has been part of the organizing committees of many international conferences, e.g., as a symposia co-chair for IEEE Globecom'16.

Tamer Khattab received the B.Sc. and M.Sc. degrees from Cairo University, Giza, Egypt, and the Ph.D. degree from The University of British Columbia, Vancouver, BC, Canada, in 2007. From 1994 to 1999, he was with IBM wtc, Giza, Egypt. From 2000 to 2003, he was with Nokia Networks, Burnaby, BC, Canada. He joined Qatar University in 2007, where he is currently an Associate Professor of Electrical Engineering. He is also a senior member of the technical staff with the Qatar Mobility Innovation Center. His research interests cover physical layer security techniques, information theoretic aspects of communication systems, and radar and RF sensing techniques.

Aiman Erbad is an Associate Professor at the Computer Science and Engineering (CSE) Department and the Director of Research Planning and Development at Qatar University. Dr. Erbad obtained a Ph.D. in Computer Science from the University of British Columbia (Canada) and a Master of Computer Science in Embedded Systems and Robotics from the University of Essex (UK). Dr. Erbad received the Platinum award from H.H. The Emir Sheikh Tamim bin Hamad Al Thani at the Education Excellence Day 2013 (Ph.D. category). Dr. Erbad's research interests span cloud computing, multimedia systems, and networking, and his research is published in reputed international conferences and journals.