
Expert Systems With Applications 187 (2022) 115928


Single and multiple drones detection and identification using RF based deep
learning algorithm
Boban Sazdić-Jotić a,*, Ivan Pokrajac b, Jovan Bajčetić a, Boban Bondžulić a, Danilo Obradović a

a Military Academy, University of Defence in Belgrade, Veljka Lukića Kurjaka 33, Belgrade, Serbia
b Military Technical Institute, Ratka Resanovića 1, Belgrade, Serbia

A R T I C L E I N F O

Keywords:
Anti-drone system
Classification
Deep learning algorithms
Detection
Drone
Multiple drones

A B S T R A C T

Unmanned aerial systems, especially drones, have gone through remarkable improvement and expansion in recent years. Drones have been widely utilized in many applications and scenarios due to their low price and ease of use. However, in some applications drones can pose a malicious threat. To diminish risks to public security and personal privacy, it is necessary to deploy an effective and affordable anti-drone system in sensitive areas to detect, localize, identify, and defend against intruding malicious drones. This research article presents a new publicly available radio frequency drone dataset and investigates detection and identification methodologies to detect single or multiple drones and identify a single detected drone's type. Moreover, special attention in this paper is given to examining the possibility of using deep learning algorithms, particularly fully connected deep neural networks, as an anti-drone solution within two different radio frequency bands. We propose a supervised deep learning algorithm with fully connected deep neural network models that use raw drone signals rather than features. The proposed algorithm shows a lot of potential: the probability of detecting a single drone is 99.8%, and the probability of type identification is 96.1%. Moreover, the results of multiple drones detection demonstrate an average accuracy of 97.3%. To date, no comprehensive publication in the open literature has presented and examined the problem of multiple drones detection in the radio frequency domain.

1. Introduction

As a fast-developing research area with various improvements, several terms are currently used in the literature for unmanned aerial systems (UAS). It is essential to comprehend the difference between these confusing terms to understand complex UAS usage in many areas. The term UAS is generally used to describe the entire operating equipment, including the aircraft, the ground control station from where the aircraft is operated, and the wireless data link (Hassanalian & Abdelkefi, 2017; Mitka & Mouroutsos, 2017; Yaacoub et al., 2020).

In addition, most researchers use the term "drone" in everyday language (slang) and research works (jargon) instead of any official term. Sometimes both terms are used correspondingly: drone for the autonomous aircraft and UAS for the complete system. It is important to highlight that there is enormous growth in the usage of UAS in many applications, which raises an additional need to regulate air traffic. However, it is unrealistic to expect that every drone pilot (operator) will comply with the air traffic regulations, especially those that have malicious intentions. Therefore, it is of great importance to have an effective anti-drone (ADRO) system in security-sensitive areas that demands timely detection, identification, localization, and protection from the unauthorized intrusion of drones. To accomplish such a challenging task, it is necessary to engage complex and various types of sensors. These sensors are combined to do the very difficult task of finding and locating aerial targets (drones are small, very agile, and can operate at relatively low altitudes) in a complex environment (especially in urban areas). The use of these sensors requires the synergy of different technologies, i.e., the fusion of radar, audio, video, and/or radio frequency surveillance technologies.

For the purpose of this paper, a new radio frequency (RF) dataset was created and provided in (Sazdić-Jotić et al., 2020). This dataset contains radio signals of various drones working under different flight modes. To this end, three experiments were conducted to record the RF signals of several drones in different scenarios (including occurrences where several drones operate simultaneously). Afterward, this RF drone dataset was used to test the ADRO system for drone detection and

* Corresponding author.
E-mail addresses: [email protected] (B. Sazdić-Jotić), [email protected] (I. Pokrajac), [email protected] (D. Obradović).

https://doi.org/10.1016/j.eswa.2021.115928
Received 27 December 2020; Received in revised form 4 July 2021; Accepted 16 September 2021
Available online 23 September 2021
0957-4174/© 2021 Elsevier Ltd. All rights reserved.

identification based on deep neural networks (DNN), as used in (Al-Sa'd et al., 2019). Furthermore, the ADRO system for multiple drones detection was created and its recognition accuracy was verified on this RF drone dataset.

One of the main challenges was to create an appropriate algorithm for multiple drones detection. Moreover, it was imperative to label the training data accurately and adjust the training parameters for the best results. The key advantage of the proposed algorithm is its high accuracy, despite using a non-complex technique (Short-Time Fourier Transform, STFT) for preprocessing the RF signals. In this manner, the proposed algorithm performs better than other prominent deep learning (DL) algorithms because, after the STFT calculation, the data is ready for the training process without any additional operations. The most critical phase of the proposed algorithm is the preparation of the training data because it is time-consuming. Although we have obtained better results than the authors in (Al-Sa'd et al., 2019), there is still a need to increase the recognition rate for the flight modes of drones made by the same manufacturer.

The rest of the paper is organized as follows: section 2 is an overview of the related works in the area of ADRO studies, section 3 describes the system model of the proposed algorithm and the experiments based on it, in section 4 the results and discussions are presented, and finally, the conclusion and future works are given in section 5.

2. Related works

In this section, recent RF based ADRO approaches used for the detection and identification of intruding drones are reviewed. Moreover, the state-of-the-art DL algorithms are introduced and their implementation in ADRO systems is considered.

2.1. Radio frequency drone detection

RF drone detection is specific in several ways compared to other drone detection methods. First of all, radar, audio, and optoelectronic sensors (OES) detect and collect well-known features just from a drone (an autonomous aircraft), while RF sensors monitor the UAS's radio communication links prearranged between the two participants: a drone and the corresponding ground control station (the flight controller operated by the drone pilot). Second, RF based drone detection must collect features over a wide frequency range to detect the radio communication of a UAS. Finally, in a real RF environment, the existence of many other radio signals (e.g., Wi-Fi or Bluetooth) sharing the same frequency band with the UAS makes RF based detection quite challenging. In (Peacock & Johnstone, 2013), identifying the media access control (MAC) address of a drone is presented as a feasible algorithm. However, this algorithm is only capable of detecting drones with open MAC addresses, because a MAC address can be easily spoofed and can provide diverse interpretations. In addition, a huge problem can be creating and updating a comprehensive dataset containing the MAC addresses of all drones, because there is an ever-increasing variety of drones. Some commercial ADRO systems exploit knowledge of the communication protocols to detect, identify, locate, and in some cases hijack (take over) the drone to land it at a predefined location (D-Fend, 2021). An improved solution is the usage of the radio signal's features for drone detection. Based on this, the authors in (Nguyen et al., 2016) proposed a drone detection algorithm based on specific signatures of a drone's body vibration and body shifting that are embedded in the Wi-Fi signal transmitted by a drone. Similarly, RF drone fingerprints (statistical features of a radio signal) with machine learning (ML) algorithms are presented in (Ezuma et al., 2020) for the same objective. However, different techniques like multistage classification, prior RF signal detection, noise removal, or multiresolution analysis were used in this research before the ML algorithms to improve detection results. Additionally, drone localization algorithms based on measurements of received signal strength (RSS), time of arrival (TOA), and direction of arrival (DOA) are applicable with certain restrictions (e.g., multipath and non-line-of-sight propagation). Challenges in RF drone detection (ambient noise or RF background, multipath, etc.) can be an insurmountable obstacle in some cases, causing a large false alarm rate.

2.2. Deep learning (DL) algorithms

A respectable ADRO system should have several different types of sensors, i.e., it should be composed of heterogeneous sensor units combined to find application in practice. Such a system, on the other hand, represents a compromise (tradeoff) between well-timed detection, long detection range, high detection probability, and sensor imperfections. It should be noted that the performance of any ADRO system depends on numerous factors: properties of the target (drone dimensions, speed, communication, and navigational system), the surveillance environment (RF traffic density, urban or rural areas, and atmospheric conditions), the hardware parameters (receiver sensitivity, antenna characteristics, OES sensor quality, antenna azimuth, and elevation directivity), and the corresponding algorithms. Using expensive sensors, or combining multiple heterogeneous sensors, is not as effective for the detection and identification of drones as the usage of a good algorithm. DL algorithms have demonstrated excellent results when applied to different types of problems such as image object detection and identification in (Krizhevsky et al., 2017; Nair & Hinton, 2009; Pathak et al., 2018), digital signal processing (Peng et al., 2019; Zha et al., 2019; Zhou et al., 2019), radar detection and identification in (Alhadhrami et al., 2019), speech and text recognition in (Karita et al., 2018; Y. Kim, 2014), and in all other areas of everyday life. Furthermore, the multimodal DL algorithms presented in (Narkhede et al., 2021; Patel et al., 2015) were presented as novel approaches and implementations for sensor fusion in various applications.

DL algorithms have also found their application in exploiting various data from different sensors to detect and identify drones. Most of these studies include a mandatory step where a frequency or time–frequency representation is calculated and saved to an image that is later used as input data for existing DL algorithms already proven in object detection and identification problems. However, there is a small number of related research papers where raw RF data are used as a solution (MathWorks, 2021). Instead of using an image as DNN input data, a rudimentary RF signal transformation is performed and the output is then used as a DNN input for an RF based DL algorithm.

A comparative analysis of the most recent and prominent studies in the field of detection and identification of drones based on DL algorithms is shown in Table 1. The analysis was performed on the obtained results as well as the challenges, benefits, and disadvantages of the used algorithms.

It is important to note that the RF detection and identification of the UAS (drones and flight controllers) by using state-of-the-art DL algorithms is the primary objective of all studies presented in Table 1. Additionally, the identification of the drone flight modes is examined only in (Al-Emadi & Al-Senaid, 2020; Al-Sa'd et al., 2019) and in this paper. More importantly, all authors used RF signals from the 2.4 GHz ISM frequency band for their studies and no paper presents 5.8 GHz ISM band research results. Another interesting fact is that only the authors in (Abeywickrama et al., 2018; Zhang et al., 2018) investigated scenarios in the outdoor environment. Most of the authors used the FFT of the raw RF signal (i.e., spectrum matrix) or spectrogram images as DNN input, except in (Zhang et al., 2018). Furthermore, in (Basak et al., 2021), the authors investigated the impact of additive white Gaussian noise (AWGN) and multipath propagation on the DL algorithms' accuracy. It is also interesting that in the same work, the authors examined the possibility of detecting multiple drones, but with simulated data. They used previously recorded RF signals (not overlapping in the frequency spectrum) from flight controllers, then artificially summed those signals and created the DNN input for multiple drones detection scenarios.

In this paper, the power RF spectrum of the raw radio signal is


calculated, stored as rows in a matrix (spectrum matrix) which are then used as inputs to the FC-DNN. Experiments were performed in indoor conditions, and DJI's commercial off-the-shelf (COTS) drones were used. The contribution of this research lies in several facts: a new dataset of RF drone signals is made publicly available, multiple drones operating simultaneously were successfully detected, and RF drone signals in the 2.4 and 5.8 GHz frequency bands reserved internationally for industrial, scientific, and medical (ISM) purposes were collected and examined.

It is essential to notice that there are currently a small number of publicly available datasets that contain RF drone signals. Because of this, research on various detection and classification algorithms is limited. Also, to the best of our knowledge, no significant open literature research has aimed at the detection of multiple drones using real RF signals (only with radar signals so far). Moreover, the analysis of RF signals from both ISM bands helped to confirm our preliminary assumptions and to compare the results of detection and identification between different frequency bands.

Table 1
Related works on detection and identification of drones using DL algorithms.

(Al-Sa'd et al., 2019)
- Proposed algorithm and/or used features: three fully connected Deep Neural Networks (FC-DNN); DNN input: FFT of the raw RF signal (spectrum matrix).
- Results: detection, type, and flight mode identification scenario; 3 drones with 4 flight modes; drone detection accuracy: 99.7%; drone type accuracy: 84.5%; flight mode accuracy: 46.3%.
- Remarks: indoor conditions; the first study which investigated drone flight mode identification; only the 2.4 GHz ISM band; without noise consideration.

(Al-Emadi & Al-Senaid, 2020)
- Proposed algorithm and/or used features: Convolutional Neural Network (1-D CNN); DNN input: FFT of the raw RF signal (spectrum matrix).
- Results: detection, type, and flight mode identification scenario; 3 drones with 4 flight modes; drone detection accuracy: 99.8%; drone type accuracy: 85.8%; flight mode accuracy: 59.2%.
- Remarks: indoor conditions; the dataset from (Al-Sa'd et al., 2019); the 1-D CNN model outperforms the results from (Al-Sa'd et al., 2019); only the 2.4 GHz ISM band; without noise consideration.

(Zhang et al., 2018)
- Proposed algorithm and/or used features: Back Propagation Neural Network (BPNN); DNN input: statistical features (slope, kurtosis, and skewness).
- Results: drone detection scenario; N/A drones; indoor accuracy: 92.67% within 5 m; outdoor accuracy: 82% within 3 km.
- Remarks: indoor and outdoor conditions; the BPNN model outperforms the algorithms based on the statistical features; only the 2.4 GHz ISM band; with noise consideration (empirical mode decomposition (EMD) is used to remove the noise from the RF signal); additional consideration: distance from the UAS to the sensor.

(Basak et al., 2021)
- Proposed algorithm and/or used features: Deep Residual Neural Network (DRNN); DNN input: FFT of the raw RF signal (spectrum matrix).
- Results: detection and type identification scenario; 9 drones; indoor accuracy: 99.88%.
- Remarks: indoor conditions (the dataset is recorded in an anechoic chamber); multiple drones classification with simulated RF signals (RF signals are artificially added); only the 2.4 GHz ISM band; with noise and multipath consideration.

(Parlin et al., 2020)
- Proposed algorithm and/or used features: Convolutional Neural Network (2-D CNN); DNN input: spectrogram images.
- Results: UAS protocols detection scenario; 3 UAS protocol types (Taranis, Lightbridge, and Phantom 2); indoor accuracy: 97.25%.
- Remarks: indoor conditions; detection and jamming of the UAS flight controllers; only the 2.4 GHz ISM band; without noise consideration.

(Abeywickrama et al., 2018)
- Proposed algorithm and/or used features: sparse denoising autoencoder (SDAE-DNN); DNN input: the sum of I/Q data from N antennas.
- Results: direction finding (DF) scenario; outdoor accuracy: the worst DF accuracy is 92% for the 180° antenna sector.
- Remarks: outdoor conditions; single-channel receiver and four-element DF antenna (8 × 45° wide antenna sectors); only the 2.4 GHz ISM band; without noise consideration.

3. Methodology

In this section, the system model is presented. This system model is used to create a new RF drone dataset and to test the detection and identification of single and multiple drones with the proposed DL algorithm.

3.1. System model

For the purposes of this research, the system model used is similar to the model presented in (Al-Sa'd et al., 2019). The similarity is in the two main subsystems and corresponding components. These two subsystems are the RF dataset development subsystem and the drone detection and identification subsystem, which are shown in Fig. 1.

The main difference from the research explained in (Al-Sa'd et al., 2019) is in the RF dataset development subsystem, because this system model is designated to implement the recording of new RF drone data in the 2.4 and 5.8 GHz ISM bands. It is important to note that all recordings were performed separately, i.e., first for the 2.4 and then for the 5.8 GHz ISM band. Besides, the particular and unique scenario (two and three drones operating in the same space and time domain) was recorded in the two separate ISM bands. The subsystem for RF drone dataset

Fig. 1. System model: (1) Equipment Under Test (EUT), (2) RF sensor with antennas, (3) RF drone dataset, (4) RF drone dataset verification, (5) RF signal preprocessing and labeling, (6) FC-DNN models, and (7) the system output.

development, which is also described in detail in (Šević et al., 2020), consists of the RF sensor and the equipment under test (EUT). For the purpose of the data acquisition and recording, a Tektronix Real-Time Spectrum Analyzer, two receiving antennas (for the two separate ISM bands), and the corresponding cables and connectors were used. The Real-Time Spectrum Analyzer instantaneously recorded a bandwidth of 110

Fig. 2. RF sensor and EUT: a) Tektronix Real-Time Spectrum Analyzer RSA 6120A, b) receiving antennas, c) DJI Phantom IV Pro, d) DJI Mavic 2 Enterprise, and e) DJI Mavic 2 Zoom.


MHz within the 2.4 or 5.8 GHz ISM bands and saved the records directly in a *.mat format that is suitable for loading and analysis in the MatLab application. It is important to notice that the acquisition length of each RF signal was 450 ms and the sampling frequency was 150 MSample/s for an instantaneous bandwidth of 110 MHz, which produces a *.mat file of around 500 MB for every recording in the experiment. Each saved file also contains additional information (metadata) about the experiment parameters that can be used after importing into the MatLab application.

3.1.1. Equipment under test (EUT)

For the EUT, three different UAS (DJI Phantom IV Pro, DJI Mavic 2 Zoom, and DJI Mavic 2 Enterprise with the corresponding flight controllers) were used (DJI, 2021). Fig. 2 shows the Tektronix Real-Time Spectrum Analyzer RSA 6120A with two receiving antennas (for the 2.4 and 5.8 GHz ISM bands) and the EUT (DJI Phantom IV Pro, DJI Mavic 2 Enterprise, and DJI Mavic 2 Zoom, respectively from left to right).

A drone pilot uses the flight controller to send RF commands that operate the autonomous aircraft by changing the flight (operational) modes, altitude (position), and speed. Most drones can operate in the 2.4 or 5.8 GHz ISM bands, usually in one or simultaneously in both; when communication is disrupted, the band is adjusted automatically or manually via the flight controller.

3.2. RF dataset development subsystem

Data acquisition was performed for each drone separately, and each time four distinctive flight (operational) modes were recorded. In order to analyze the whole radio communication traffic, each data acquisition process was organized into five steps:

1. EUT is off. The drone is turned off. The RF background (ambient noise) is recorded. For a more genuine approach, random Wi-Fi and Bluetooth radio communications were induced at the beginning.
2. EUT is on and performing the connecting procedure with the flight controller. The drone is turned on by the operator and connects to the flight controller. The recording is performed until the drone is connected to the flight controller.
3. EUT is hovering. The operator lifts off the drone and puts it in a state of hovering (the drone is flying without altering altitude and position, i.e., the operator is not giving any commands). The recording is performed while the drone is hovering (maintaining height and position) without any operator commands.
4. EUT is flying. The operator issues some basic commands while the drone is moving left, right, down, and up. The recording is performed while the drone is flying (changing the altitude and position all the time) following the commands from the operator.
5. EUT is flying and recording a video. The operator enables video recording on the drone and issues some basic commands while the drone is moving left, right, down, and up. The recording is performed while the drone is flying and the video is being transmitted and recorded to the flight controller.

This step-by-step procedure was done for all drones and constitutes one experiment. Firstly, three experiments were executed, one with each drone, one with two drones, and one with three drones, with 25 recordings in total (15 recordings for the first experiment, 5 recordings for the second, and 5 recordings for the third experiment) in the 2.4 GHz ISM band. Then, the whole procedure was repeated for the 5.8 GHz ISM band, with another 25 recordings (or 50 recordings in total). Again, it is important to point out that each experiment was conducted in laboratory (indoor) conditions where the RF background recording was executed first.

The final stage of the RF dataset development subsystem was to perform time–frequency analysis (TFA) in the MatLab application over the collected raw RF drone signals to verify the RF drone dataset (see Fig. 1). The MatLab embedded spectrogram function based on the Short-Time Fourier Transform (STFT) was used as one of TFA's basic tools before the drone detection and identification subsystem was engaged. The primary objective of the verification stage of the RF drone dataset was to check whether it is possible to visually differentiate the types of drones and the types of flight modes in the calculated spectrograms. The secondary objective was to determine the elementary physical characteristics of the RF drone signals such as the signal type (fixed frequency signal, frequency hopping signal, signal with direct sequence, or burst), total channel number, channel central frequency, channel bandwidth, total occupied bandwidth, channel raster (frequency distance between channels), hop duration, and dwell time for each drone's recording (see supplementary material). As a result, all three types of drones and their operational modes were successfully differentiated. These results were not used as an input for the FC-DNN models, but just for checking the consistency of the RF drone dataset.

Examples of spectrograms calculated from the recorded RF activities in the 2.4 GHz ISM band are shown in the following figures. In the beginning, as an illustration, Fig. 3 provides a detailed explanation of all the components on the spectrogram of the RF drone signal, to better understand the basic method of the drone operation.

Fig. 3. Characteristic elements on the spectrogram of the RF drone signal.

Two distinguished components can be seen in Fig. 3: the uplink for command-and-control signals and the downlink for the video signal. The uplink for command-and-control signals is marked with black circles, while the downlink for the video signal is marked with a blue rectangle. It is unambiguous that the downlink is a fixed frequency emission (the central frequency does not change during the operation) and the uplink is a frequency hopping emission (the central frequency changes according to a predefined rule during the operation). All spectrograms of all drones that were part of the experiments are shown in the supplementary material.

Further, Fig. 4 shows the spectrograms of one drone with four distinctive flight modes, Fig. 5 illustrates the spectrograms of a single mode of operation for different drones, and finally, Fig. 6 presents snapshots of the situation when multiple drones (two and three) operate simultaneously.

Fig. 4. RF spectrograms of DJI Mavic 2 Enterprise in the 2.4 GHz ISM band: (a) connecting mode, (b) hovering mode, (c) flying mode, and (d) flying and video recording mode.

Moreover, several additional facts were established which can be of interest in further studies: all three drones operate in a designated frequency range which is defined by DJI, all three drones use the spread spectrum (SS) technique based on frequency hopping (FH) for communication between the drone and the flight controller, and the drones' FH emission is very simple and comparable to sweep signals. Also, it is interesting to note that the DJI Phantom IV Pro has the same principle of FH emission in all operational (flight) modes (see supplementary material).

3.3. Drone detection and identification subsystem

The second part of the system model, the drone detection and identification subsystem, remained similar as in (Al-Sa'd et al., 2019), and the three FC-DNN models were used to verify the consistency of the new RF drone dataset (on the subject of drone detection, drone identification, and drone type and flight mode identification). The additional, fourth FC-DNN model for multiple drones detection is the crucial difference introduced in this paper. Also, slight changes were made in the data labeling procedure to validate the possibility of detecting situations when two or three drones operate concurrently.

3.3.1. Signal preprocessing

Custom-made MatLab functions were used to perform the signal preprocessing and labeling steps required for the necessary data preparation. Such data were intended to be used as an input to the FC-DNN models. In order to preprocess and prepare the raw data obtained from the first part of the system model, signal segmentation and a simple calculation of the power RF spectrum were performed for each segment of the signals in both ISM bands. The signal segmentation was performed by dividing the whole acquired RF signal into snapshots consisting of 100,000 samples. This process was performed to speed up the signal preprocessing and to perform data augmentation, because each segment of each RF signal was used as an FC-DNN input. It is important to emphasize that simple signal segmentation without overlapping windows and without discarding noisy segments (segments without useful signal) was used in this research. Moreover, data augmentation and the accuracy of the FC-DNN model can be improved by using an overlapping window for signal segmentation, as well as by discarding segments where there is only noise (e.g., between two hops). For the power RF spectrum calculation, a modified built-in MatLab function (pspectrum) was used with 2048 frequency bins and without the DC component of the RF signal (zero mean option). This function finds a compromise between the spectral resolution achievable with the entire length of the signal and the performance limitations that result from computing a large FFT (MathWorks, 2021). Additionally, data scaling of the FC-DNN inputs was performed as a recommended preprocessing step by using the normalization technique (to rescale the input variables to the range of zero to one before training a neural network model).

Subsequently, data aggregation (of the preprocessed and labeled RF signals from all experiments) was performed and the results were stored in four matrices (two matrices for the 2.4 GHz and two for the 5.8 GHz ISM bands, representing the power RF spectrum). The FC-DNN input data specification is presented in detail in Table 2.
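The preprocessing chain described in section 3.3.1 (segmentation into 100,000-sample snapshots, a 2048-bin power spectrum with the DC component removed, and min-max normalization to [0, 1]) can be sketched in Python/NumPy. This is an illustrative re-implementation under stated assumptions, not the authors' MatLab code: the function names are hypothetical, and a plain averaged-FFT estimate stands in for MatLab's pspectrum.

```python
import numpy as np

def segment_signal(iq, segment_len=100_000):
    """Split a recorded RF signal into non-overlapping snapshots of
    100,000 samples (trailing samples that do not fill a snapshot
    are dropped)."""
    n_segments = len(iq) // segment_len
    return iq[:n_segments * segment_len].reshape(n_segments, segment_len)

def power_spectrum(segment, n_bins=2048):
    """Averaged-FFT power spectrum with 2048 frequency bins; removing
    the mean suppresses the DC component (the 'zero mean' option)."""
    segment = segment - segment.mean()
    n_fft = 2 * n_bins
    frames = segment[:len(segment) // n_fft * n_fft].reshape(-1, n_fft)
    return (np.abs(np.fft.rfft(frames, axis=1)) ** 2).mean(axis=0)[:n_bins]

def normalize(x):
    """Min-max scaling of one input vector to the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

# A 450 ms recording at 150 MSample/s holds 67,500,000 samples; the
# paper reports 670 usable segments per signal. A short placeholder
# signal is used here instead of a real recording.
rng = np.random.default_rng(0)
recording = rng.standard_normal(1_000_000)
segments = segment_signal(recording)                   # shape (10, 100000)
inputs = np.stack([normalize(power_spectrum(s)) for s in segments])
print(inputs.shape)                                    # (10, 2048)
```

Each row of `inputs` corresponds to one matrix column of the paper's spectrum matrix, so transposing it yields the [2048 × segments] layout used in Table 2.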


Fig. 5. RF spectrograms for the RF background and the flying and video recording mode of different drones in the 2.4 GHz ISM band: (a) RF background, (b) DJI Phantom IV Pro, (c) DJI Mavic 2 Zoom, and (d) DJI Mavic 2 Enterprise.

Fig. 6. RF spectrograms when multiple drones (two and three) operate simultaneously in the 2.4 GHz ISM band: (a) two drones during flying and video recording mode, and (b) three drones during flying and video recording mode.

Table 2
FC-DNN input data specifications.

RF signal (per recorded signal):
- Experiment No.1 (single drone): 670 segments; resulting matrix [2048 × 670]
- Experiment No.2 (two drones): 670 segments; resulting matrix [2048 × 670]
- Experiment No.3 (three drones): 670 segments; resulting matrix [2048 × 670]

2.4 GHz ISM band:
- Experiment No.1: 15 signals; 15 × 670 segments; resulting matrix [2048 × 10,050]
- Experiment No.2: 5 signals; 5 × 670 segments; resulting matrix [2048 × 3350]
- Experiment No.3: 5 signals; 5 × 670 segments; resulting matrix [2048 × 3350]

2.4 GHz input data (concatenated matrix, FC-DNN input):
- FC-DNN No. 1, 2, 3: 2.4 GHz input matrix [2048 × 10,050]
- FC-DNN No. 4: 2.4 GHz input matrix [2048 × 16,750] (also contains the data from the first experiment)

5.8 GHz ISM band:
- Experiment No.1: 15 signals; 15 × 670 segments; resulting matrix [2048 × 10,050]
- Experiment No.2: 5 signals; 5 × 670 segments; resulting matrix [2048 × 3350]
- Experiment No.3: 5 signals; 5 × 670 segments; resulting matrix [2048 × 3350]

5.8 GHz input data (concatenated matrix, FC-DNN input):
- FC-DNN No. 1, 2, 3: 5.8 GHz input matrix [2048 × 10,050]
- FC-DNN No. 4: 5.8 GHz input matrix [2048 × 16,750] (also contains the data from the first experiment)
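Assuming the shapes listed in Table 2, the aggregation into FC-DNN input matrices amounts to column-wise concatenation. The sketch below (Python/NumPy, with zero-filled placeholders instead of real spectra) only demonstrates this bookkeeping.

```python
import numpy as np

# Shapes from Table 2, per ISM band: 15 single-drone signals plus
# 5 two-drone and 5 three-drone signals, 670 segments per signal,
# 2048 frequency bins per segment.
exp1 = np.zeros((2048, 15 * 670))   # single drone -> [2048 x 10050]
exp2 = np.zeros((2048, 5 * 670))    # two drones   -> [2048 x 3350]
exp3 = np.zeros((2048, 5 * 670))    # three drones -> [2048 x 3350]

# FC-DNN models 1-3 use only experiment No.1 ...
input_123 = exp1
# ... while model 4 (multiple drones detection) concatenates all
# three experiments along the segment axis.
input_4 = np.concatenate([exp1, exp2, exp3], axis=1)

print(input_123.shape)   # (2048, 10050)
print(input_4.shape)     # (2048, 16750)
```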


It must be mentioned that each concatenated matrix from the first experiment was used as an input to the first three FC-DNN models for solving the detection and identification problems. Additionally, all concatenated matrices from all the experiments were used as an input to the fourth FC-DNN model for solving the multiple drones detection problem.

3.3.2. Data labeling

Labeling for all FC-DNN models was performed by adding rows at the end of the corresponding aggregated matrices. For detection and identification purposes, the rows that were added to the concatenated matrices from the first experiment are as follows: the first row consists of labels for the detection of drones, the second row consists of labels for the drone's type identification, and the third row consists of labels for the drone's flight mode identification. For the detection of multiple drones, the same labeling principle was used, but one more row of labels was added. It should be noted that the labels in each row determine whether the signal's segment (matrix column) represents the RF background, the presence of a drone, or another specific situation (the type of drone, the flight mode of the drone, or the presence of multiple drones).

To prepare the FC-DNN input data and to create the new RF drone dataset, a Binary Unique Identifier (BUI) proposed in (Al-Sa'd et al., 2019) was used for data notation. This good practice was followed, and the new RF drone dataset was created using the suggested parameters: the number of experiments (E), the total number of drones (D), and the total number of flight modes, including the RF background (F). Finally, the new dataset was completed using E = 3, D = 3, and F = 5. Details of this RF drone dataset, showing the number of segments for each class relative to the used FC-DNN models, are presented in Table 3.

Table 3
Specification of the RF drone dataset for one ISM band (segments of 100,000 samples).

FC-DNN model No.  Class name                              Class label  Signals No.  Segments No.  Ratio [%]
1                 No drone (RF background)                "1"          3            2010          20.00
                  Drone                                   "2"          12           8040          80.00
2                 No drone (RF background)                "1"          3            2010          20.00
                  DJI Phantom IV Pro                      "2"          4            2680          26.67
                  DJI Mavic 2 Zoom                        "3"          4            2680          26.67
                  DJI Mavic 2 Enterprise                  "4"          4            2680          26.67
3                 No drone (RF background)                "1"          3            2010          20.00
                  DJI Phantom IV Pro, flight mode 1       "2"          1            670           6.67
                  DJI Phantom IV Pro, flight mode 2       "3"          1            670           6.67
                  DJI Phantom IV Pro, flight mode 3       "4"          1            670           6.67
                  DJI Phantom IV Pro, flight mode 4       "5"          1            670           6.67
                  DJI Mavic 2 Zoom, flight mode 1         "6"          1            670           6.67
                  DJI Mavic 2 Zoom, flight mode 2         "7"          1            670           6.67
                  DJI Mavic 2 Zoom, flight mode 3         "8"          1            670           6.67
                  DJI Mavic 2 Zoom, flight mode 4         "9"          1            670           6.67
                  DJI Mavic 2 Enterprise, flight mode 1   "10"         1            670           6.67
                  DJI Mavic 2 Enterprise, flight mode 2   "11"         1            670           6.67
                  DJI Mavic 2 Enterprise, flight mode 3   "12"         1            670           6.67
                  DJI Mavic 2 Enterprise, flight mode 4   "13"         1            670           6.67
4                 No drone (RF background)                "1"          5            3350          20.00
                  One drone                               "2"          12           8040          48.00
                  Two drones                              "3"          4            2680          16.00
                  Three drones                            "4"          4            2680          16.00

Based on the presented specification of the RF drone dataset, data labeling was performed for detection and identification purposes for the different FC-DNNs:

Drone detection. The first FC-DNN model uses a data label for the detection of a drone's presence. This label represents the drone absence class with "1" and the presence class with "2". All recorded data from the first experiment were used.

Drone type identification. The second FC-DNN model uses a data label for drone type identification. This label contains four different designations: "1" – RF background, "2" – DJI Phantom IV Pro, "3" – DJI Mavic 2 Zoom, and "4" – DJI Mavic 2 Enterprise. As in the previous case, all recorded data from the first experiment were used.

Drone type and flight (operational) mode identification. The third FC-DNN model uses a data label for drone type and flight mode identification. This label contains thirteen different designations: "1" – RF background, "2" – DJI Phantom IV Pro connected, "3" – DJI Phantom IV Pro hovering, "4" – DJI Phantom IV Pro flying, "5" – DJI Phantom IV Pro flying and recording video, "6" – DJI Mavic 2 Zoom connected, "7" – DJI Mavic 2 Zoom hovering, "8" – DJI Mavic 2 Zoom flying, "9" – DJI Mavic 2 Zoom flying and recording video, "10" – DJI Mavic 2 Enterprise connected, "11" – DJI Mavic 2 Enterprise hovering, "12" – DJI Mavic 2 Enterprise flying, and "13" – DJI Mavic 2 Enterprise flying and recording video. For drone type and flight mode identification purposes, recorded data from the first experiment were used.
Drone number detection. The fourth FC-DNN model uses a data label for the detection of the number of drones. This label contains four different designations: "1" – RF background, "2" – one active drone, "3" – two active drones, and "4" – three active drones. Recorded data from all experiments were used.

3.3.3. FC-DNN model

After the signal preprocessing and data labeling, the detection and identification of the drones were performed. The supervised DL algorithm was engaged with four FC-DNN models, where each one consists of an input layer, hidden layers, and an output layer. The fundamental building block of each FC-DNN (i.e., a feedforward neural network) is the fully-connected neuron, which can be defined by the following formula (Winovich, 2021):

y = f( Σ_j ω_j x_j + b )   (1)

where x_j is the input to the neuron, ω_j are the weights, b is the bias, j runs over the input size, f is the activation function, and y is the output. By combining multiple neurons, it is possible to create a simple fully-connected neural network that consists of an input layer, one intermediate (so-called hidden) layer, and an output layer. The values of the hidden layer are given by:

y_i = f( Σ_j ω_{i,j} x_j + b_i )   (2)

where j is the size of the input layer and i is the size of the hidden layer.


The final output of this shallow fully-connected neural network, z, is obtained from the sum of all results of the hidden layer; thus, it is presented as:

z = f( Σ_l ω_l^(z) y_l + b^(z) )   (3)

where l runs over the hidden-layer neurons, and ω_l^(z) and b^(z) refer to the weight and bias values of the output layer. Using matrix notation, these equations can be expressed more concisely. For example, the underlying mathematical relations for an FC-DNN with two hidden layers are given by the following equations:

y^(1) = f( W^(1) x + b^(1) )   (4)

y^(2) = f( W^(2) y^(1) + b^(2) )   (5)

z = f( W^(z) y^(2) + b^(z) )   (6)

Equations (4), (5), and (6) denote the results obtained from the first, the second, and the output layer, respectively.

Accordingly, the proposed FC-DNN models can be described with the following input–output relations (Al-Sa'd et al., 2019):

z_i^(l) = f^(l)( W^(l) z_i^(l−1) + b^(l) )   (7)

where i is the index of the input RF segment; z_i^(0) = y_i is the power spectrum of the i-th input RF segment; z_i^(l−1) is the output of layer l − 1 and the input to layer l; z_i^(l) is the output of layer l and the input to layer l + 1; z_i^(L) = c_i is the classification vector for the i-th input RF segment; b^(l) = [b_1^(l), b_2^(l), …, b_{H^(l)}^(l)]^T is the bias vector of layer l; and f^(l) is the activation function of layer l (l = 1, 2, …, L, where L − 1 is the total number of hidden layers). Also, the weight matrix of layer l is designated as W^(l):

W^(l) = [ w_{1,1}^(l) … w_{1,H^(l−1)}^(l) ; ⋮ ⋱ ⋮ ; w_{H^(l),1}^(l) … w_{H^(l),H^(l−1)}^(l) ]   (8)

where w_{i,j}^(l) is the weight between the i-th neuron of layer l and the j-th neuron of layer l − 1; H^(l) is the total number of neurons in layer l; H^(0) = M = 2048; and H^(L) = C is the number of classes in the classification vector c_i.

It is important to notice that each FC-DNN model in the proposed algorithm has a similar core architecture. Each FC-DNN model consists of an input layer, hidden layers, and an output layer. The hidden layers can be grouped into three separate sets, where each set consists of two fully-connected (dense) layers with the rectified linear unit (ReLU) and the sigmoid activation function. Moreover, it is significant to emphasize that the fully-connected layers in the first, the second, and the third set of hidden layers consist of 256, 128, and 64 neurons, respectively. Compared to the original FC-DNN model introduced in (Al-Sa'd et al., 2019), which had only three fully-connected hidden layers with the ReLU activation function, this is an enhancement. This can be seen in Fig. 7, which shows the fourth FC-DNN model, used for multiple drones detection, as an example.

The input layer of the fourth FC-DNN model matches the size of the RF power spectrum, calculated with 2048 frequency bins. Next, hidden layers organized in three sets are engaged. Although unusual, a combination of two activation functions was used in each set of hidden layers of the FC-DNN model. The sigmoid function is affected by saturation issues, which are explained in (Manessi & Rozza, 2018), so the ReLU function is engaged to overcome this weakness and improve the accuracy of the FC-DNN. Finally, the output layer of the fourth FC-DNN is a fully-connected layer of four neurons with the SoftMax activation function. The following FC-DNN parameters were used during the training and validation process: the stochastic gradient descent (SGD) optimization algorithm with backpropagation for error minimization, which uses the training dataset to update the model; the Adam optimizer (Zhong et al., 2020) for minimization of the classification mean square error; stratified K-fold cross-validation (K = 10) for bias minimization (to overcome the imbalance between classes); the SGD hyperparameter that controls the number of training samples per update (batch size = 20); and the SGD hyperparameter that controls the number of complete passes through the training dataset (total number of epochs = 100).

Table 4
Average elapsed time for necessary calculations.

                              Preprocessing stage   FC-DNN No. 1   FC-DNN No. 2   FC-DNN No. 3   FC-DNN No. 4
Average elapsed time [sec]    0.254251              0.052186       0.057668       0.052373       0.052378

Fig. 7. The fourth FC-DNN structure and settings.
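The stacked layer relation of Eq. (7) with the layer sizes described above can be illustrated with a plain forward pass. This is a NumPy sketch only: random weights stand in for trained ones, and the exact ordering of the ReLU and sigmoid layers inside each set is an assumption.

```python
import numpy as np

def relu(v):    return np.maximum(v, 0.0)
def sigmoid(v): return 1.0 / (1.0 + np.exp(-v))
def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Three sets of paired dense layers (256, 128, and 64 neurons per set), each set
# combining the ReLU and sigmoid activations, then a 4-class SoftMax output.
sizes = [2048, 256, 256, 128, 128, 64, 64, 4]
acts = [relu, sigmoid, relu, sigmoid, relu, sigmoid, softmax]

rng = np.random.default_rng(0)
params = [(0.01 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

z = rng.standard_normal(2048)        # z_i^(0): one power-spectrum input segment
for (W, b), f in zip(params, acts):  # Eq. (7): z^(l) = f^(l)(W^(l) z^(l-1) + b^(l))
    z = f(W @ z + b)

print(z.shape, float(z.sum()))       # four class probabilities summing to 1
```

In the actual system, the same architecture is expressed as a Keras model and the weights are fitted with the training procedure described above rather than drawn at random.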


3.4. Implementation

For implementation purposes, the proposed FC-DNN models were built with Python Anaconda version 1.9.2, the TensorFlow 2.1.0 framework (including Keras 2.3.0), and a GPU environment setup. The host machine was an Intel(R) Core(TM) i5-9400F CPU @ 2.90 GHz with 32 GB RAM and two GeForce RTX 2060 6 GB GDDR6 GPUs (CUDA toolkit version 10.1 and cuDNN version 7.6). The existing FC-DNNs were modified according to the specification of the RF drone dataset, and four FC-DNNs were created using Keras to perform the following tasks: to detect the presence of a drone; to detect the presence of a drone and identify its type; to detect the presence of a drone, identify its type, and determine its flight mode; and lastly, to detect the presence of a drone and identify the number of drones. Also, real-time testing was performed with the proposed algorithm. The average computing time of the proposed system workflow was measured for each FC-DNN model and is presented in Table 4.

It should be noted that the average time required to execute the proposed algorithm on the host machine was obtained through a simulation with 100 segments from three different, newly captured RF signals. The obtained classification times are similar for all FC-DNN models, but the time necessary for the preprocessing stage is almost 5 times longer. The results in Table 4 show that it is possible to detect/identify a drone from the received RF signal within only 0.31 s (the sum of the first column and one of the remaining columns). It should be emphasized that this is a respectable outcome, even though only the trained FC-DNN models, and not the preprocessing stage, were implemented on the GPU platform. Based on all the above, a graphic representation of the proposed algorithm's workflow is given in Fig. 8.

The workflow representation gives a detailed description of the drone detection and identification subsystem, which consists of FC-DNN data preparation, training, and real-time classification on a pre-trained model. FC-DNN data preparation is the first phase of the workflow of the proposed algorithm and can be defined as the following step-by-step procedure: loading data from the RF drone dataset, signal segmentation, spectrum calculation, aggregation of data, and data labeling. The labeled data are afterward handled by the FC-DNN models in the training process, and the trained models are finally obtained and saved for the real-time simulation of drone detection and identification. It should be noted that four separate FC-DNN models were intentionally used for the training and testing phases. The main reason for such an implementation is to satisfy the ADRO system's tactical demands: the requirement was to develop independent classifiers for single and multiple drones detection and identification. Because of that, the introduced problem was divided into several smaller ones (detection, type identification, flight mode identification, and drone number detection), which were solved using four separate FC-DNN models.

4. Results and discussions

The main goal of this research was to create a new RF drone dataset and to analyze the application possibilities of RF based DL algorithms in drone detection and identification. In addition, the results of the multiple drones detection and identification system are presented and discussed.

Performance assessment of the RF based ADRO system is represented with accuracy, precision, recall, error, false discovery rate (FDR), false negative rate (FNR), and F1 scores via the appropriate confusion matrices (Al-Sa'd et al., 2019). To better understand the performance of the FC-DNN models, an example of a confusion matrix for two classes, with an explanation of the corresponding rows, columns, and cells, is presented in Fig. 9.

Next, in Figs. 10 and 11, the overall results of the performance assessment of the RF based drone detection and identification system for both ISM bands are presented. This is convenient because it makes it easy to compare the results of drone detection and identification in the 2.4 and 5.8 GHz ISM bands. Also, the TensorFlow metrics tracked and visualized during the training process, such as loss and accuracy, are presented in the form of graphs (see the supplementary material).

First of all, Fig. 10 (a) and (b) show the classification performance of the first FC-DNN model, which detects the presence of a drone in the 2.4 and 5.8 GHz ISM bands, respectively. The results present an average accuracy of 98.6% and an average F1 score of 97.8% for the 2.4 GHz ISM band, and an average accuracy of 99.8% and an average F1 score of 99.6% for the 5.8 GHz ISM band. The absolute error of the average accuracy between the ISM bands for the first FC-DNN model is 1.2%.

Fig. 8. Flow chart of the proposed algorithm.
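The per-segment latencies in Table 4 can be obtained with a simple wall-clock loop over freshly captured segments. The sketch below is an illustration of the measurement method only; `preprocess` and `classify` are hypothetical stand-ins for the paper's preprocessing stage and a trained FC-DNN.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def preprocess(raw):                 # hypothetical stand-in for the preprocessing stage
    return np.abs(np.fft.rfft(raw))[:2048] ** 2

def classify(spectrum):              # hypothetical stand-in for a trained FC-DNN
    return int(np.argmax(spectrum[:4]))

segments = [rng.standard_normal(100_000) for _ in range(20)]

t0 = time.perf_counter()
predictions = [classify(preprocess(seg)) for seg in segments]
elapsed = (time.perf_counter() - t0) / len(segments)
print(f"average elapsed time per segment: {elapsed:.6f} s")
```

Averaging over many segments, as done here and in the paper's 100-segment simulation, smooths out scheduler jitter in the measured times.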


Fig. 9. Resultant rows, columns, and cells of the confusion matrix, with an explanation. (The figure legend marks: precision / false discovery rate (FDR) for each predicted class, recall / false negative rate (FNR) for each target class, the F1 score for predicting each class, true and false predictions as counts and percentages, the averaged F1 score, and the overall accuracy / overall error.)

Secondly, Fig. 10 (c) and (d) illustrate the classification performance of the second FC-DNN model, which detects the presence of a drone and identifies its type in the 2.4 and 5.8 GHz ISM bands, respectively. The results present an average accuracy of 96.1% and an average F1 score of 96.0% for the 2.4 GHz ISM band, and an average accuracy of 95.7% and an average F1 score of 95.8% for the 5.8 GHz ISM band. The absolute error of the average accuracy between the ISM bands for the second FC-DNN model is 0.4%.

Thirdly, Fig. 11 illustrates the classification performance of the third FC-DNN model, which detects the drone type and determines its flight (operational) mode in the 2.4 and 5.8 GHz ISM bands, respectively. The results demonstrate an average accuracy of 85.9% and an average F1 score of 84.2% for the 2.4 GHz ISM band, and an average accuracy of 86.9% and an average F1 score of 85.3% for the 5.8 GHz ISM band. The absolute error of the average accuracy between the ISM bands for the third FC-DNN model is 1.0%.

Finally, Fig. 12 illustrates the classification performance of the fourth FC-DNN model, which was used for the detection of the drone's presence and of the number of detected drones in the 2.4 and 5.8 GHz ISM bands, respectively. The results in Fig. 12 demonstrate an average accuracy of 96.2% and an average F1 score of 96.4% for the 2.4 GHz ISM band, and an average accuracy of 97.3% and an average F1 score of 96.8% for the 5.8 GHz ISM band. The absolute error of the average accuracy between the ISM bands for the fourth FC-DNN model is 1.1%.

The performance of all FC-DNN models is stable for both ISM bands, as the maximal absolute error of the average accuracy is 1.2%, obtained for the drone detection problem. Notwithstanding, a simple evaluation of the obtained results still indicates that the average accuracy is marginally better in the 5.8 GHz ISM band.

It should be emphasized that, to the best of our knowledge, this is the first paper that has presented and explained detection and identification in both ISM bands. More importantly, a new FC-DNN model was constructed, and thereafter its performance was tested on this new RF drone dataset. The created system model shows respectable results in multiple drones detection, which is also unique research, especially in the RF domain.

Moreover, we have compared the proposed algorithm with CNN and Long Short-Term Memory (LSTM) deep learning algorithms. Correspondingly, DL algorithms from the literature were engaged with our RF drone dataset with the same objective. The outcomes of this comparative analysis are presented in Table 5.

First of all, three representatives of CNN algorithms (AlexNet, ResNet-18, and SqueezeNet) were engaged and showed promising outcomes, but still below the results obtained with the proposed algorithm. These representatives provide better results for multiple drones number detection in the 2.4 GHz ISM band (for example, AlexNet achieved 1.2% better results than the proposed approach) and for flight mode identification in the 2.4 GHz ISM band (for example, ResNet-18 achieved 4.8% better results). Nevertheless, the proposed approach performed better in all other scenarios in both ISM bands. The proposed algorithm has stable detection and identification results in both ISM bands, in contrast to these three representatives, whose results in the 5.8 GHz band are significantly worse than their results in the 2.4 GHz band.

Second, the LSTM algorithm was engaged with the same objective, but it achieved worse results than the proposed algorithm in all scenarios in both ISM bands. Notwithstanding, LSTM can be a supportive algorithm because it uses the same data (the spectrum matrix) as input, like the proposed algorithm.

Third, two DL algorithms from the literature were used for comparison on our RF drone dataset, one of which is proposed for detection purposes only (Parlin et al., 2020). The proposed algorithm outperformed both DL algorithms from the literature.

Finally, two ML algorithms from the literature were also engaged to compare the proposed DL algorithm's effectiveness against some conventional methods. The feature extraction procedure used for this purpose is based on (Ezuma et al., 2020) and was executed on our dataset. After the feature extraction, 15 statistical descriptors were obtained and used as input for the k-nearest neighbor (kNN) and Support Vector Machine (SVM) ML algorithms. The best result was achieved with the kNN algorithm (the number of neighbors is 20, the distance metric is Chebyshev, and the distance weight is the squared inverse). Nevertheless, this result is still worse than those of the proposed FC-DNN models.

It is important to emphasize that the conditions of the experiments should be taken into consideration when comparing the results in Table 5. To the best of our knowledge, there are no researchers that exploit CNN (AlexNet, ResNet-18, and SqueezeNet) or LSTM algorithms for the RF detection of drones, so the corresponding fields in Table 5 are empty. Additionally, some authors did not consider all of the problems presented in this paper, but only drone detection or identification. Of particular note are the excellent results for the identification of drone controllers obtained via ML algorithms and presented in (Ezuma et al., 2020). However, these results were obtained after a Markov model-based naïve Bayes decision mechanism (for RF signal detection), which was followed by the kNN algorithm (only for drone controller identification). It is worth mentioning that the proposed algorithm achieved slightly worse accuracy, but without multistage classification, prior RF signal detection, noise removal, or multiresolution analysis.

In general, based on the obtained results, the following conclusions can be pointed out:

The average accuracy of the RF background detection persists at a high rate for all FC-DNNs in both ISM bands. This can be explained by the fact that all experiments were conducted in indoor conditions. Moreover, the accuracy of the drone detection would be reduced by performing the experiments in outdoor conditions, because the signal-to-noise ratio (SNR) would be lower and the impact of interference greater.


Fig. 10. Confusion matrices for the first two FC-DNN models, designed for drone detection and drone type identification. See Table 3 for the class labeling: (a) drone detection with 2 classes in 2.4 GHz, (b) drone detection with 2 classes in 5.8 GHz, (c) drone type identification with 4 classes in 2.4 GHz, (d) drone type identification with 4 classes in 5.8 GHz.
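The scores displayed around each confusion matrix follow directly from its cell counts. A short sketch, using the 2.4 GHz drone-detection counts reported in Fig. 10 (a):

```python
import numpy as np

# Counts from Fig. 10 (a): rows are predicted classes ("1" no drone, "2" drone),
# columns are target classes, for the 2.4 GHz ISM band.
cm = np.array([[2010,  143],
               [   0, 7897]], dtype=float)

tp = np.diag(cm)                     # correctly predicted segments per class
precision = tp / cm.sum(axis=1)      # per predicted-class row
recall = tp / cm.sum(axis=0)         # per target-class column
f1 = 2 * precision * recall / (precision + recall)
fdr = 1.0 - precision                # false discovery rate
fnr = 1.0 - recall                   # false negative rate
accuracy = tp.sum() / cm.sum()

print(np.round(f1, 3), round(float(accuracy), 3))
```

The computed values reproduce the figure's per-class F1 scores (96.6% and 99.1%) and the overall accuracy of 98.6% for this model.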

The obtained result of 98.6% for the average accuracy of drone detection in the 2.4 GHz ISM band is marginally worse compared to the work in (Al-Sa'd et al., 2019), where the achieved result was 99.7%. The main reason for this lies in the fact that a more representative RF background (ambient noise) was used in this research (simulated Bluetooth and Wi-Fi signals were used during the first step of each experiment). Furthermore, in the process of dividing the whole considered RF signal into snapshots of data consisting of 100,000 samples (RF signal segmentation), the segments that do not contain a useful signal were not discarded. However, the obtained result of 99.8% for the average accuracy of drone detection in the 5.8 GHz ISM band is better compared to the work in (Al-Sa'd et al., 2019). This is an expected result, because the 5.8 GHz ISM band tends to be less crowded than the 2.4 GHz ISM band, since fewer devices use it and it has more allocated channels.

The average accuracy of the drone number detection is very high in both ISM bands. This phenomenon can be explained with the following evidence: the spectrograms for the RF background and for RF drone radio communications are quite dissimilar, and it is easy to visually distinguish the different RF activities in a spectrogram when two or three drones operate simultaneously. This result is an outstanding outcome of this research and, more importantly, it is independent of the observed ISM band. The obtained results for the average accuracy of the drone number detection of 96.2% and 97.3% in the 2.4 and 5.8 GHz ISM bands, respectively, are better compared with the work in (W. Zhang & Li, 2018), where the achieved result was 94.2%, although for a radar sensor. Also, these results are an excellent basis for possible research on the application of DL algorithms in the detection of drone swarming.

Detection of the drone's type is considerably improved in comparison to similar studies. This resulted from the fact that the signal preprocessing step is enhanced by using a power spectrum calculation (spectral energy distribution), instead of a discrete Fourier transform


(DFT) of the signal. Such an improvement was implemented with a modified built-in MATLAB function (pspectrum) that finds a compromise between the spectral resolution achievable with the entire length of the signal and the performance limitations that result from computing a large FFT. Moreover, the usage of improved FC-DNN models (a deeper network with a combination of different activation functions) has also contributed to the improvement of the obtained results. Accordingly, the achieved results for the average accuracy of the drone's type identification of 96.1% and 95.7% in the 2.4 and 5.8 GHz ISM bands, respectively, are significantly better compared to the work in (Al-Sa'd et al., 2019).

Fig. 11. Confusion matrices for the third FC-DNN model, designed for the drone's flight mode identification. See Table 3 for the class labeling: (a) drone type and flight (operational) mode identification with 13 classes in 2.4 GHz, (b) drone type and flight (operational) mode identification with 13 classes in 5.8 GHz.

Spectrograms for different flight modes of a single drone can be very similar, which makes flight mode identification more demanding. Nevertheless, the obtained results for the average accuracy of the drone's flight mode identification of 85.9% and 86.9% in the 2.4 and 5.8 GHz ISM bands, respectively, are better compared to the work in (Al-Sa'd et al., 2019), where the achieved result was 46.8%. Moreover, these particular results are not so essential, because in a real-world ADRO system it will not be necessary to detect all flight modes, but perhaps just hovering and flying with video recording.

There is an evident deterioration in the performance of the FC-DNN when increasing the number of classes. This phenomenon can be explained by the similarities of the RF drone communications, which in this case are all from the same manufacturer. This can be observed by examining the similarities in the spectrograms presented in the supplementary material. The aforementioned introduces a challenging obstacle that can be mitigated by using deeper neural networks or other advanced classification algorithms. This is demonstrated in this research, because the authors in (Al-Sa'd et al., 2019) used just three hidden layers with the ReLU activation function, as opposed to the six hidden layers with a combination of activation functions introduced in this algorithm.

The proposed algorithm achieved better results compared to other state-of-the-art algorithms. It achieved accuracy that is in the class of prominent DL algorithms, with stable results in both ISM bands and a margin of less than ±2%. Furthermore, we can point out that AlexNet (a representative of CNNs), the LSTM (a representative of recurrent neural networks), and the proposed algorithm achieved exceptional results. The slightly worse results achieved by the CNN algorithms can be explained by the fact that some useful information can be lost when preparing images for the DNN input, because of the size-reducing operation.

It is noticeable that these results are slightly better than the results in (Al-Sa'd et al., 2019), which testifies that the new records in the introduced and publicly available RF drone dataset (Sazdić-Jotić et al., 2020) have been verified for further use. The implementation of the developed drone RF dataset demonstrates the feasibility of a confident drone detection and identification system.

5. Conclusion and future works

The contribution of this article is the creation of a new RF drone dataset consisting of records from three experiments (one experiment with individually operating drones and two experiments with two and three drones operating simultaneously). Such a dataset will be a preliminary point for a practical anti-drone system, because it includes the RF signals of different drone types in different flight modes; therefore, it can be used for testing and validation of advanced, intelligent algorithms, and it can be adopted for researching and developing anti-drone systems with the possibility of detecting and identifying drones and their current flight mode. Furthermore, the FC-DNN models were tested and verified, which proved that this RF drone dataset can be used for developing new, possibly more effective DL algorithms in the future. Experimental results showed that the proposed algorithm in an indoor environment can detect a single drone with a probability of 99.8% and identify drones with a probability of 96.1%. The proposed algorithm provides better results than other state-of-the-art algorithms. Additionally, it was demonstrated that multiple drones detection is possible with the proposed algorithm with a high accuracy of 97.3%, which is, to the best of our knowledge, a very significant outcome. Extending this RF drone dataset and fusing it with other drone detection approaches, such as optoelectronic images and videos, radar echoes, and acoustic recordings, can improve the performance of the detection and identification system by exploiting the strengths of each modality. Furthermore, it is possible to explore the effect of different combinations of activation functions together with a deeper neural network structure on the performance of the
the achieved result was 84.5%. proposed FC-DNN models. The proposed methodology used in this paper
Identification of the flight modes is the least accurate. This can be performed very well during the testing phase, which was conducted
within this research, and the results suggest that it has a potential for

[Fig. 12 occupies this position; its confusion-matrix cells (Predicted Class vs. Target Class) are figure residue and are omitted here. Panels: (a) number of detected drones with 4 classes in 2.4 GHz; (b) number of detected drones with 4 classes in 5.8 GHz.]

Fig. 12. Average classification performance for the fourth designed FC-DNN model using confusion matrices. See Table 3 for the class labeling.
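The row and column percentages reported in the confusion matrices above follow directly from the raw cell counts. A minimal sketch of that bookkeeping (illustrative code, not the authors' implementation; the example counts are hypothetical):

```python
import numpy as np

def per_class_metrics(cm):
    """cm[i, j]: count of samples with target class j predicted as class i
    (rows = Predicted Class, columns = Target Class, as in Figs. 11 and 12)."""
    cm = np.asarray(cm, dtype=float)
    precision = np.diag(cm) / cm.sum(axis=1)  # row-wise: correct hits among all predictions of a class
    recall = np.diag(cm) / cm.sum(axis=0)     # column-wise: correct hits among all true samples of a class
    accuracy = np.trace(cm) / cm.sum()        # overall average accuracy
    return precision, recall, accuracy

# Hypothetical 2-class example (numbers are not from the paper).
p, r, acc = per_class_metrics([[95, 5],
                               [5, 95]])
```

The diagonal-over-row and diagonal-over-column ratios correspond to the percentage pairs printed along the matrix margins.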
Table 5
Comparison of obtained average accuracy (%) with the state-of-the-art ML and DL algorithms. Each cell group lists: result reported in the literature / our dataset at 2.4 GHz / our dataset at 5.8 GHz.

Algorithm | Detection | Type identification | Flight mode identification | Multiple drones number detection
AlexNet | – / 97.1 / 90.0 | – / 94.4 / 85.3 | – / 86.0 / 80.3 | – / 97.4 / 71.0
ResNet-18 | – / 96.8 / 92.9 | – / 95.9 / 85.8 | – / 90.7 / 80.1 | – / 97.3 / 87.0
SqueezeNet | – / 96.6 / 83.4 | – / 93.1 / 82.4 | – / 87.2 / 78.2 | – / 97.1 / 76.4
LSTM SJB | – / 96.2 / 99.5 | – / 92.2 / 94.2 | – / 85.4 / 84.8 | – / 93.2 / 90.8
(Al-Emadi & Al-Senaid, 2020) | 99.8 / 96.0 / 99.7 | 85.8 / 93.5 / 96.7 | 59.2 / 81.4 / 73.3 | – / 96.0 / 97.1
(Parlin et al., 2020) | 97.3 / 96.3 / – | – / – / – | – / – / – | – / – / –
(Ezuma et al., 2020) k-Nearest Neighbor (kNN) | – / 95.1 / 94.1 | 98.1 / 83.3 / 75.2 | – / 70.1 / 62.9 | – / 85.5 / 79.5
(Ezuma et al., 2020) Support Vector Machine | – / 94.9 / 87.1 | 96.5 / 59.5 / 58.4 | – / 48.5 / 42.7 | – / 72.6 / 69.2
Proposed approach based on (Al-Sa'd et al., 2019) | 99.7 / 98.6 / 99.8 | 84.5 / 96.1 / 95.7 | 46.8 / 85.9 / 86.9 | – / 96.2 / 97.3
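The spectrum pre-processing discussed earlier trades the frequency resolution of one whole-record DFT against the cost of computing a very large FFT. A Welch-style segment-averaging sketch illustrates the idea behind that compromise; this is an assumption-laden NumPy approximation, not MATLAB's pspectrum, and all names and parameters here (nfft, the Hanning taper, the test tone) are illustrative choices:

```python
import numpy as np

def averaged_spectrum(x, nfft=1024):
    """Average the magnitude spectra of nfft-sample segments instead of taking
    one huge FFT over the whole record: coarser frequency resolution, but a
    much smaller transform and a smoother, lower-variance estimate."""
    x = np.asarray(x, dtype=float)
    nseg = len(x) // nfft
    segs = x[:nseg * nfft].reshape(nseg, nfft)
    window = np.hanning(nfft)                          # taper to reduce spectral leakage
    spectra = np.abs(np.fft.rfft(segs * window, axis=1)) ** 2
    return spectra.mean(axis=0)                        # one averaged spectrum, nfft // 2 + 1 bins

# Hypothetical example: a 2.4 kHz tone sampled at 48 kHz for 4 s.
fs = 48_000
t = np.arange(4 * fs) / fs
spec = averaged_spectrum(np.sin(2 * np.pi * 2400 * t), nfft=1024)
```

With nfft = 1024 the bin spacing is fs / nfft, so the tone concentrates near bin 2400 / fs * nfft ≈ 51; increasing nfft sharpens resolution at the price of larger transforms, which is the trade-off described above.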
practical implementation in real case scenarios. This research can be extended in various ways, such as: expanding the existing dataset by conducting experiments under indoor and outdoor conditions with various sensors (RF, audio, OES, and radar); using other types of drones, with varying speeds and greater distances from the RF sensors; and examining the effects on FC-DNN accuracy of channel fading, noise, or jamming signals, as well as of different spectrum calculations. The research and development of algorithms that include multimodal fusion will be the main objective in future work. The intention is to connect the proposed FC-DNN algorithm with the LSTM algorithm to exploit data from RF and audio sensors. Moreover, the multimodal fusion implementation in the GPU environment and testing in real situations will be the ultimate ADRO research goal.

CRediT authorship contribution statement

Boban Sazdić-Jotić: Conceptualization, Data curation, Software, Writing - original draft. Ivan Pokrajac: Supervision, Investigation, Validation. Jovan Bajčetić: Visualization, Investigation, Writing - review & editing. Boban Bondžulić: Methodology, Supervision, Writing - review & editing. Danilo Obradović: Data curation, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research is conducted under the project funded by the University of Defence in Belgrade Grant No. VA–TT/3/20–22 and the project RABAMADRIDS funded by the Military Technical Institute (MTI) in Belgrade. The findings achieved herein are solely the responsibility of the authors.
Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.eswa.2021.115928.

References

Abeywickrama, S., Jayasinghe, L., Fu, H., Nissanka, S., & Yuen, C. (2018). RF-based direction finding of UAVs using DNN. 2018 IEEE International Conference on Communication Systems (ICCS), 157–161. https://doi.org/10.1109/ICCS.2018.8689177
Al-Emadi, S., & Al-Senaid, F. (2020). Drone detection approach based on radio-frequency using convolutional neural network. 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT). https://doi.org/10.1109/ICIoT48696.2020.9089489
Al-Sa'd, M. F., Al-Ali, A., Mohamed, A., Khattab, T., & Erbad, A. (2019). RF-based drone detection and identification using deep learning approaches: An initiative towards a large open source drone database. Future Generation Computer Systems, 100, 86–97. https://doi.org/10.1016/j.future.2019.05.007
Alhadhrami, E., Al-Mufti, M., Taha, B., & Werghi, N. (2019). Learned micro-Doppler representations for targets classification based on spectrogram images. IEEE Access, 7, 139377–139387. https://doi.org/10.1109/ACCESS.2019.2943567
Basak, S., Rajendran, S., Pollin, S., & Scheers, B. (2021). Drone classification from RF fingerprints using deep residual nets. 2021 International Conference on COMmunication Systems & NETworkS (COMSNETS), 548–555. https://doi.org/10.1109/COMSNETS51098.2021.9352891
D-Fend. (2021). D-Fend Solutions A.D. Ltd. https://www.d-fendsolutions.com/
DJI. (2021). DJI. https://www.dji.com/
Ezuma, M., Erden, F., Kumar Anjinappa, C., Ozdemir, O., & Guvenc, I. (2020). Detection and classification of UAVs using RF fingerprints in the presence of Wi-Fi and Bluetooth interference. IEEE Open Journal of the Communications Society, 1, 60–76. https://doi.org/10.1109/OJCOMS.2019.2955889
Hassanalian, M., & Abdelkefi, A. (2017). Classifications, applications, and design challenges of drones: A review. Progress in Aerospace Sciences, 91, 99–131. https://doi.org/10.1016/j.paerosci.2017.04.003
Karita, S., Watanabe, S., Iwata, T., Ogawa, A., & Delcroix, M. (2018). Semi-supervised end-to-end speech recognition. Interspeech 2018, 2–6. https://doi.org/10.21437/Interspeech.2018-1746
Kim, Y. (2014). Convolutional neural networks for sentence classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1746–1751. https://doi.org/10.3115/v1/D14-1181
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84–90. https://doi.org/10.1145/3065386
Manessi, F., & Rozza, A. (2018). Learning combinations of activation functions. Proceedings – International Conference on Pattern Recognition, 61–66. https://doi.org/10.1109/ICPR.2018.8545362
MathWorks. (2021). Deep learning for signal processing with MATLAB. https://www.mathworks.com/campaigns/offers/deep-learning-for-signal-processing-white-paper.html?elqCampaignId=10588
Mitka, E., & Mouroutsos, S. G. (2017). Classification of drones. American Journal of Engineering Research (AJER), 6(7), 36–41. http://www.ajer.org/papers/v6(07)/F06073641.pdf
Nair, V., & Hinton, G. E. (2009). 3D object recognition with deep belief nets. Advances in Neural Information Processing Systems 22, 1339–1347. https://proceedings.neurips.cc/paper/2009/file/6e7b33fdea3adc80ebd648fffb665bb8-Paper.pdf
Narkhede, P., Walambe, R., Mandaokar, S., Chandel, P., Kotecha, K., & Ghinea, G. (2021). Gas detection and identification using multimodal artificial intelligence based sensor fusion. Applied System Innovation, 4(1), 1–14. https://doi.org/10.3390/asi4010003
Nguyen, P., Ravindranatha, M., Nguyen, A., Han, R., & Vu, T. (2016). Investigating cost-effective RF-based detection of drones. Proceedings of the 2nd Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use, 17–22. https://doi.org/10.1145/2935620.2935632
Parlin, K., Riihonen, T., Karm, G., & Turunen, M. (2020). Jamming and classification of drones using full-duplex radios and deep learning. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). https://doi.org/10.1109/PIMRC48278.2020.9217351
Patel, J., Shah, S., Thakkar, P., & Kotecha, K. (2015). Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques. Expert Systems with Applications, 42(1), 259–268. https://doi.org/10.1016/j.eswa.2014.07.040
Pathak, A. R., Pandey, M., & Rautaray, S. (2018). Application of deep learning for object detection. Procedia Computer Science, 132, 1706–1717. https://doi.org/10.1016/j.procs.2018.05.144
Peacock, M., & Johnstone, M. N. (2013). Towards detection and control of civilian unmanned aerial vehicles. Australian Information Warfare and Security Conference, 1–8. https://doi.org/10.4225/75/57a847dfbefb5
Peng, S., Jiang, H., Wang, H., Alwageed, H., Zhou, Y., Sebdani, M. M., & Yao, Y.-D. (2019). Modulation classification based on signal constellation diagrams and deep learning. IEEE Transactions on Neural Networks and Learning Systems, 30(3), 718–727. https://doi.org/10.1109/TNNLS.2018.2850703
Sazdić-Jotić, B. M., Pokrajac, I., Bajčetić, J., Bondžulić, B. P., Joksimović, V., Šević, T., & Obradović, D. (2020). VTI_DroneSET_FFT. Mendeley Data. https://doi.org/10.17632/s6tgnnp5n2.1
Šević, T., Joksimović, V., Pokrajac, I., Radana, B., Sazdić-Jotić, B., & Obradović, D. (2020). Interception and detection of drones using RF-based dataset of drones. Scientific Technical Review, 70(2), 29–34.
Winovich, N. (2021). Deep learning. https://www.math.purdue.edu/~nwinovic/deep_learning.html
Yaacoub, J.-P., Noura, H., Salman, O., & Chehab, A. (2020). Security analysis of drones systems: Attacks, limitations, and recommendations. Internet of Things, 11, Article 100218. https://doi.org/10.1016/j.iot.2020.100218
Zha, X., Peng, H., Qin, X., Li, G., & Yang, S. (2019). A deep learning framework for signal detection and modulation classification. Sensors, 19(18), 4042. https://doi.org/10.3390/s19184042
Zhang, H., Cao, C., Xu, L., & Gulliver, T. A. (2018). A UAV detection algorithm based on an artificial neural network. IEEE Access, 6, 24720–24728. https://doi.org/10.1109/ACCESS.2018.2831911
Zhang, W., & Li, G. (2018). Detection of multiple micro-drones via cadence velocity diagram analysis. Electronics Letters, 54(7), 441–443. https://doi.org/10.1049/el.2017.4317
Zhong, H., Chen, Z., Qin, C., Huang, Z., Zheng, V. W., Xu, T., & Chen, E. (2020). Adam revisited: A weighted past gradients perspective. Frontiers of Computer Science, 14(5), Article 145309. https://doi.org/10.1007/s11704-019-8457-x
Zhou, S., Yin, Z., Wu, Z., Chen, Y., Zhao, N., & Yang, Z. (2019). A robust modulation classification method using convolutional neural networks. EURASIP Journal on Advances in Signal Processing, 2019(1), 21. https://doi.org/10.1186/s13634-019-0616-6