Single and multiple drones detection and identification using RF based deep learning algorithm
Boban Sazdić-Jotić a,*, Ivan Pokrajac b, Jovan Bajčetić a, Boban Bondžulić a, Danilo Obradović a
a Military Academy, University of Defence in Belgrade, Veljka Lukića Kurjaka 33, Belgrade, Serbia
b Military Technical Institute, Ratka Resanovića 1, Belgrade, Serbia
Keywords: Anti-drone system; Classification; Deep learning algorithms; Detection; Drone; Multiple drones

Abstract
Unmanned aerial systems, especially drones, have gone through remarkable improvement and expansion in recent years. Drones have been widely utilized in many applications and scenarios due to their low price and ease of use. However, in some applications drones can pose a malicious threat. To diminish risks to public security and personal privacy, it is necessary to deploy an effective and affordable anti-drone system in sensitive areas to detect, localize, identify, and defend against intruding malicious drones. This research article presents a new publicly available radio frequency drone dataset and investigates detection and identification methodologies to detect single or multiple drones and identify a single detected drone's type. Moreover, special attention is given in this paper to examining the possibility of using deep learning algorithms, particularly fully connected deep neural networks, as an anti-drone solution within two different radio frequency bands. We propose a supervised deep learning algorithm with fully-connected deep neural network models that use raw drone signals rather than extracted features. The proposed algorithm shows considerable potential: the probability of detecting a single drone is 99.8%, and the probability of type identification is 96.1%. Moreover, the results of multiple drones detection demonstrate an average accuracy of 97.3%. To date, no comparably comprehensive publication in the open literature has presented and examined the problem of multiple drones detection in the radio frequency domain.
* Corresponding author.
E-mail addresses: [email protected] (B. Sazdić-Jotić), [email protected] (I. Pokrajac), [email protected] (D. Obradović).
https://doi.org/10.1016/j.eswa.2021.115928
Received 27 December 2020; Received in revised form 4 July 2021; Accepted 16 September 2021
Available online 23 September 2021
0957-4174/© 2021 Elsevier Ltd. All rights reserved.
identification based on deep neural networks (DNN) which are used in (Al-Sa'd et al., 2019). Furthermore, the ADRO system for multiple drones detection was created and its recognition accuracy was verified on this RF drone dataset.
One of the main challenges was to create an appropriate algorithm for multiple drones detection. Moreover, it was imperative to label the training data accurately and to adjust the training parameters for the best results. The key advantage of the proposed algorithm is its high accuracy, despite using a non-complex technique (Short-Time Fourier transform, STFT) for preprocessing RF signals. In this manner, the proposed algorithm performs better than other prominent deep learning (DL) algorithms because, after the STFT calculation, the data are ready for the training process without any additional operations. The most critical phase of the proposed algorithm is the preparation of the training data, because it is time-consuming. Although we have obtained better results than the authors in (Al-Sa'd et al., 2019), there is still a need to increase the recognition rate of the flight modes of drones made by the same manufacturer.
The rest of the paper is organized as follows: Section 2 is an overview of the related works in the area of ADRO studies, Section 3 describes the system model of the proposed algorithm and the experiments based on it, Section 4 presents the results and discussions, and finally, the conclusion and future works are given in Section 5.

2. Related works

In this section, recent RF based ADRO approaches used for the detection and identification of intruding drones are reviewed. Moreover, state-of-the-art DL algorithms are introduced and their implementation in ADRO systems is considered.

2.1. Radio frequency drone detection

RF drone detection is specific in several ways compared to other drone detection methods. First of all, radar, audio, and optoelectronic sensors (OES) detect and collect well-known features just from the drone (the autonomous aircraft), while RF sensors monitor the UAS's radio communication links prearranged between the two participants: the drone and the corresponding ground control station (the flight controller operated by the drone pilot). Second, RF based drone detection must collect features over a wide frequency range to detect the radio communication of a UAS. Finally, in a real RF environment, the existence of many other radio signals (e.g. Wi-Fi or Bluetooth) sharing the same frequency band with the UAS makes RF based detection quite challenging. In (Peacock & Johnstone, 2013), identifying the media access control (MAC) address of a drone is presented as a feasible algorithm. However, this algorithm is only capable of detecting drones with open MAC addresses, because the MAC address can be easily spoofed and can provide diverse interpretations. In addition to this, a huge problem can be to create and update a comprehensive dataset containing the MAC addresses of all drones, because there is an ever-increasing variety of drones. Some commercial ADRO systems exploit knowledge of the communication protocols to detect, identify, locate and, in some cases, hijack (take over) the drone to land it at a predefined location (D-Fend, 2021). An improved solution is the usage of the radio signal's features for drone detection. Based on this, the authors in (Nguyen et al., 2016) proposed a drone detection algorithm based on specific signatures of a drone's body vibration and body shifting that are embedded in the Wi-Fi signal transmitted by the drone. Similarly, RF drone fingerprints (statistical features of a radio signal) with machine learning (ML) algorithms are presented in (Ezuma et al., 2020) for the same objective. However, different techniques like multistage classification, prior RF signal detection, noise removal, or multiresolution analysis were used in this research before the ML algorithms to improve the detection results. Additionally, drone localization algorithms based on measurements of received signal strength (RSS), time of arrival (TOA), and direction of arrival (DOA) are applicable with certain restrictions (e.g. multipath and non-line-of-sight propagation). Challenges in RF drone detection (ambient noise or RF background, multipath, etc.) can be an insurmountable obstacle in some cases, causing a large false alarm rate.

2.2. Deep learning (DL) algorithms

A respectable ADRO system should have several different types of sensors, i.e. it should be composed of heterogeneous sensor units combined to find application in practice. Such a system, on the other hand, represents a compromise (tradeoff) between well-timed detection, long detection range, high detection probability, and sensor imperfections. It should be noted that the performance of any ADRO system depends on numerous factors: properties of the target (drone dimensions, speed, communication, and navigational system), the surveillance environment (RF traffic density, urban or rural areas, and atmospheric conditions), the hardware parameters (receiver sensitivity, antenna characteristics, OES sensor quality, antenna azimuth, and elevation directivity), and the corresponding algorithms. Using expensive sensors, or combining multiple heterogeneous sensors, is not as effective for the detection and identification of drones as is the usage of a good algorithm. DL algorithms have demonstrated excellent results when applied to different types of problems such as image object detection and identification in (Krizhevsky et al., 2017; Nair & Hinton, 2009; Pathak et al., 2018), digital signal processing (Peng et al., 2019; Zha et al., 2019; Zhou et al., 2019), radar detection and identification in (Alhadhrami et al., 2019), speech and text recognition in (Karita et al., 2018; Y. Kim, 2014), and in all other areas of everyday life. Furthermore, the multimodal DL algorithms in (Narkhede et al., 2021; Patel et al., 2015) were presented as novel approaches and implementations for sensor fusion in various applications.
DL algorithms have also found their application in exploiting various data from different sensors to detect and identify drones. Most of these studies include a mandatory step where a frequency or time–frequency representation is calculated and saved as an image that is later used as input data for existing DL algorithms already proven on object detection and identification problems. However, there is a small number of related research papers where raw RF data are used as a solution (MathWorks, 2021). Instead of using an image for the DNN input data, a rudimentary RF signal transformation is performed and the output is then used as the DNN input for the RF based DL algorithm.
A comparative analysis of the most recent and prominent studies in the field of detection and identification of drones based on DL algorithms is shown in Table 1. The analysis was performed on the obtained results as well as the challenges, benefits, and disadvantages of the used algorithms.
It is important to note that the RF detection and identification of the UAS (drones and flight controllers) by using state-of-the-art DL algorithms is the primary objective of all studies presented in Table 1. Additionally, the identification of the drone flight modes is examined only in (Al-Emadi & Al-Senaid, 2020; Al-Sa'd et al., 2019) and in this paper. More importantly, all authors used RF signals from the 2.4 GHz ISM frequency band for their studies, and no paper presents 5.8 GHz ISM band research results. Another interesting fact is that only the authors in (Abeywickrama et al., 2018; Zhang et al., 2018) investigated scenarios in the outdoor environment. Most of the authors used the FFT of the raw RF signal (i.e. the spectrum matrix) or spectrogram images as the DNN input, except in (Zhang et al., 2018). Furthermore, in (Basak et al., 2021), the authors investigated the impact of additive white Gaussian noise (AWGN) and multipath propagation on the accuracy of DL algorithms. It is also interesting that in the same work the authors examined the possibility of detecting multiple drones, but with simulated data. They used previously recorded RF signals (not overlapping in the frequency spectrum) from flight controllers, then artificially summed those signals and created the DNN input for multiple drones detection scenarios.
In this paper, the power RF spectrum of the raw radio signal is
Fig. 1. System model: (1) Equipment Under Test (EUT), (2) RF sensor with antennas, (3) RF drone dataset, (4) RF drone dataset verification, (5) RF signal preprocessing and labeling, (6) FC-DNN models, and (7) the system output.
development, which is also described in detail in (Šević et al., 2020), consists of the RF sensor and the equipment under test (EUT). For the purpose of data acquisition and recording, the Tektronix Real-Time Spectrum Analyzer and two receiving antennas (for the two separate ISM bands) with corresponding cables and connectors were used.
Fig. 2. RF sensor and EUT: a) Tektronix Real-Time Spectrum Analyzer RSA 6120A, b) receiving antennas, c) DJI Phantom IV Pro, d) DJI Mavic 2 Enterprise, and e) DJI Mavic 2 Zoom.
The Real-Time Spectrum Analyzer instantaneously recorded a bandwidth of 110 MHz within the 2.4 or 5.8 GHz ISM bands and saved the records directly in a *.mat format that is suitable for loading and analyzing in the MatLab application. It is important to note that the acquisition length of each RF signal was 450 ms and the sampling frequency was 150 MSample/s for the instantaneous bandwidth of 110 MHz, which produces a *.mat file of around 500 MB for every recording in the experiment. Each saved file also contains additional information (metadata) about the experiment parameters that can be used after importing into the MatLab application.

3.1.1. Equipment under test (EUT)

For the EUT, three different UAS (DJI Phantom IV Pro, DJI Mavic 2 Zoom, and DJI Mavic 2 Enterprise with the corresponding flight controllers) were used (DJI, 2021). Fig. 2 shows the Tektronix Real-Time Spectrum Analyzer RSA 6120A with two receiving antennas (for the 2.4 and 5.8 GHz ISM bands) and the EUT (DJI Phantom IV Pro, DJI Mavic 2 Enterprise, and DJI Mavic 2 Zoom, respectively from left to right).
A drone pilot uses the flight controller to send RF commands that operate the autonomous aircraft by changing flight (operational) modes, altitude (position), and speed. Most drones can operate in the 2.4 or 5.8 GHz ISM bands, usually in one or simultaneously in both; when communication is disrupted, the band is adjusted automatically or manually via the flight controller.

3.2. RF dataset development subsystem

Data acquisition was performed for each drone separately, and each time four distinctive flight (operational) modes were recorded. In order to analyze the whole radio communication traffic, each data acquisition process was organized into five steps:
1. EUT is off. The drone is turned off. The RF background (ambient noise) is recorded. For a more genuine approach, random Wi-Fi and Bluetooth radio communications were induced at the beginning.
2. EUT is on and performing the connecting procedure with the flight controller. The drone is turned on by the operator and connects to the flight controller. The recording is performed until the drone is connected to the flight controller.
3. EUT is hovering. The operator lifts off the drone and puts it in a state of hovering (the drone is flying without altering altitude and position, i.e., the operator is not giving any commands). The recording is performed while the drone is hovering (maintaining height and position) without any operator commands.
4. EUT is flying. The operator issues some basic commands while the drone is moving left, right, down, and up. The recording is performed while the drone is flying (changing altitude and position all the time) following the commands from the operator.
5. EUT is flying and recording a video. The operator enables video recording on the drone and issues some basic commands while the drone is moving left, right, down, and up. The recording is performed while the drone is flying and the video is being transmitted and recorded to the flight controller.
This step-by-step procedure was done for all drones and constituted one experiment. Firstly, three experiments were executed: one with each single drone, one with two drones, and one with three drones, with 25 recordings in total (15 recordings for the first experiment, 5 recordings for the second, and 5 recordings for the third experiment) in the 2.4 GHz ISM band. Then, the whole procedure was repeated for the 5.8 GHz ISM band, with another 25 recordings (or 50 recordings in total). Again, it is important to point out that each experiment was conducted in laboratory (indoor) conditions, where the RF background recording was executed first.
The final stage of the RF dataset development subsystem was to perform time–frequency analysis (TFA) in the MatLab application over the collected raw RF drone signals to verify the RF drone dataset (see Fig. 1). The MatLab embedded spectrogram function, based on the Short-Time Fourier Transform (STFT), was used as one of TFA's basic tools before the drone detection and identification subsystem was engaged.
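To make this verification step concrete, a minimal Python sketch of an STFT-based spectrogram is given below. The paper itself used MatLab's spectrogram function; the scipy-based version here is only an illustrative stand-in, and the FFT length of 4096 samples and the 50% overlap are assumptions made for the example rather than parameters reported in the paper.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

FS = 150e6  # sampling frequency of the recordings, 150 MSample/s

def plot_spectrogram(x: np.ndarray, fs: float = FS, nperseg: int = 4096) -> None:
    """STFT-based spectrogram of one recorded snippet, for visual dataset verification."""
    f, t, sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg,
                                   noverlap=nperseg // 2, return_onesided=False)
    # Shift so the frequency axis is monotonic for complex (IQ) input.
    f = np.fft.fftshift(f)
    sxx = np.fft.fftshift(sxx, axes=0)
    plt.pcolormesh(t * 1e3, f / 1e6, 10 * np.log10(sxx + 1e-18), shading="auto")
    plt.xlabel("Time [ms]")
    plt.ylabel("Frequency offset [MHz]")
    plt.title("RF drone signal spectrogram")
    plt.show()

# Usage on a hypothetical array of complex IQ samples loaded from one recording:
# plot_spectrogram(iq_samples)
```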
The primary objective of the verification stage of the RF drone dataset was to check whether it is possible to visually differentiate the types of drones and the types of flight modes in the calculated spectrograms. The secondary objective was to determine the elementary physical characteristics of the RF drone signals, such as the signal type (fixed frequency signal, frequency hopping signal, signal with direct sequence, or burst), total channel number, channel central frequency, channel bandwidth, total occupied bandwidth, channel raster (frequency distance between channels), hop duration, and dwell time for each drone's recording (see supplementary material). As a result, all three types of drones and their operational modes were successfully differentiated. These results were not used as an input for the FC-DNN models, but only for checking the consistency of the RF drone dataset.
Examples of spectrograms calculated from the recorded RF activities in the 2.4 GHz ISM band are shown in the following figures. In the beginning, as an illustration, Fig. 3 provides a detailed explanation of all the components in the spectrogram of the RF drone signal, to better understand the basic method of drone operation.
Two distinct components can be seen in Fig. 3: the uplink for command-and-control signals and the downlink for the video signal. The uplink for command-and-control signals is marked with black circles, while the downlink for the video signal is marked with a blue rectangle. It is unambiguous that the downlink is a fixed frequency emission (the central frequency does not change during the operation) and the uplink is a frequency hopping emission (the central frequency changes according to a predefined rule during the operation). Spectrograms of all drones that were part of the experiments are shown in the supplementary material.
Further, Fig. 4 shows the spectrograms of one drone with four distinctive flight modes, Fig. 5 illustrates spectrograms of a single mode of operation for different drones, and finally, Fig. 6 presents snapshots of the situation when multiple drones (two and three) operate simultaneously.
Moreover, several additional facts were established which can be of interest in further studies: all three drones operate in a designated frequency range defined by DJI; all three drones use the spread spectrum (SS) technique based on frequency hopping (FH) for communication between drone and flight controller; and the drone's FH emission is very simple and comparable to sweep signals. Also, it is interesting to note that the DJI Phantom IV Pro has the same principle of FH emission in all operational (flight) modes (see supplementary material).

3.3. Drone detection and identification subsystem

The second part of the system model, the drone detection and identification subsystem, remained similar to that in (Al-Sa'd et al., 2019), and three FC-DNN models were used to verify the consistency of the new RF drone dataset (for drone detection, drone identification, and drone type and flight mode identification). The additional, fourth FC-DNN model for multiple drones detection is the crucial difference introduced in this paper. Also, slight changes were made in the data labeling procedure to validate the possibility of detecting situations when two or three drones operate concurrently.

3.3.1. Signal preprocessing

Custom-made MatLab functions were used to perform the signal preprocessing and labeling steps required for the necessary data preparation. Such data were intended to be used as an input to the FC-DNN models. In order to preprocess and prepare the raw data obtained from the first part of the system model, signal segmentation and a simple calculation of the power RF spectrum were performed for each segment of the signals in both ISM bands. The signal segmentation was performed by dividing the whole acquired RF signal into snapshots of 100,000 samples. This process was performed to speed up the signal preprocessing and to perform data augmentation, because each segment of each RF signal was used as an FC-DNN input.
It is important to emphasize that simple signal segmentation, without overlapping windows and without discarding noisy segments (segments without useful signal), was used in this research. Moreover, the data augmentation and the accuracy of the FC-DNN model can be improved by using an overlapping window for signal segmentation, as well as by discarding segments that contain only noise (e.g. between two hops). For the power RF spectrum calculation, a modified built-in MatLab function (pspectrum) was used with 2048 frequency bins and without the DC component of the RF signal (zero mean option). This function finds a compromise between the spectral resolution achievable with the entire length of the signal and the performance limitations that result from computing large FFTs (MathWorks, 2021).
Additionally, data scaling of the FC-DNN inputs, a recommended preprocessing step, was performed by using the normalization technique (rescaling the input variables to the range of zero to one before training a neural network model).
Subsequently, data aggregation of the preprocessed and labeled RF signals from all experiments was performed, and the results were stored in four matrices (two matrices for the 2.4 GHz and two for the 5.8 GHz ISM band, representing the power RF spectrum). The FC-DNN input data specification is presented in detail in Table 2.
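As an illustration of the preprocessing just described, the following Python sketch approximates the same steps: segmentation into 100,000-sample snapshots, a 2048-bin power spectrum with the mean removed, and min-max scaling to [0, 1]. The paper used MatLab's pspectrum, so the averaged-periodogram spectrum and the hypothetical load_recording helper below are stand-ins under stated assumptions rather than the authors' exact implementation.

```python
import numpy as np

SEGMENT_LEN = 100_000   # samples per snapshot, as in the paper
N_BINS = 2048           # frequency bins of the power spectrum

def segment_signal(x: np.ndarray, seg_len: int = SEGMENT_LEN) -> np.ndarray:
    """Split one recording into non-overlapping segments (trailing samples dropped)."""
    n_seg = len(x) // seg_len
    return x[: n_seg * seg_len].reshape(n_seg, seg_len)

def power_spectrum(segment: np.ndarray, n_bins: int = N_BINS) -> np.ndarray:
    """Averaged-periodogram power spectrum with n_bins bins and the DC term suppressed."""
    segment = segment - segment.mean()               # "zero mean" option: remove DC
    n_frames = len(segment) // n_bins
    frames = segment[: n_frames * n_bins].reshape(n_frames, n_bins)
    spec = np.abs(np.fft.fft(frames, axis=1)) ** 2   # periodogram of each frame
    return spec.mean(axis=0)                         # average over frames

def normalize(p: np.ndarray) -> np.ndarray:
    """Min-max scaling of one input vector to the range [0, 1]."""
    return (p - p.min()) / (p.max() - p.min() + 1e-12)

# Usage on one hypothetical recording of complex IQ samples loaded from a *.mat file:
# x = load_recording("recording_01.mat")             # hypothetical loader
# inputs = np.stack([normalize(power_spectrum(s)) for s in segment_signal(x)])
# 'inputs' now has shape (num_segments, 2048) and can be fed to the FC-DNN models.
```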
Fig. 6. RF spectrograms when multiple drones (two and three) operate simultaneously in the 2.4 GHz ISM band: (a) two drones during flying and video recording mode; (b) three drones during flying and video recording mode.
Table 2
FC-DNN input data specifications for Experiment No. 1 (single drone), Experiment No. 2 (two drones), and Experiment No. 3 (three drones).
It must be mentioned that each concatenated matrix from the first experiment was used as an input to the first three DNN models for solving the detection and identification problems. Additionally, all concatenated matrices from all the experiments were used as an input to the fourth FC-DNN model for solving the multiple drones detection problem.
Table 3
Specification of the RF drone dataset for one ISM band (columns: FC-DNN model No., class name, class label, signal No., segments No. (100000 samples), and ratio [%]).
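For illustration, the sketch below shows one way the aggregated input matrices and one-hot labels could be organized for the fourth FC-DNN model (drone number detection). The class coding used here (0 for RF background only, 1 to 3 for the number of simultaneously operating drones) and the helper itself are assumptions made only for this example and should be checked against Table 3 and the released dataset.

```python
# Hypothetical aggregation and labeling sketch for the fourth FC-DNN model
# (drone number detection); segment matrices and class codes are assumed.
import numpy as np
from tensorflow.keras.utils import to_categorical

def aggregate_and_label(segment_matrices, drone_counts, num_classes=4):
    """Concatenate per-recording segment matrices (each of shape [n_segments, 2048])
    and build one-hot labels from the number of drones active in each recording."""
    x = np.concatenate(segment_matrices, axis=0)
    y = np.concatenate([
        np.full(len(m), count) for m, count in zip(segment_matrices, drone_counts)
    ])
    return x, to_categorical(y, num_classes=num_classes)

# Example with three hypothetical recordings: background only, one drone, two drones.
# X, Y = aggregate_and_label([bg_segments, one_drone_segments, two_drone_segments],
#                            drone_counts=[0, 1, 2])
```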
The final output from this shallow fully-connected neural network, z, is the sum of all results obtained from the hidden layer, and it is presented as:

z = f\left( \sum_{l} \omega_l^{(z)} y_l + b^{(z)} \right) \quad (3)

where l is the number of hidden layers and \omega_l^{(z)} and b^{(z)} refer to the weight and bias values of the corresponding layer. Using matrix notation, these equations can be expressed more concisely. For example, the underlying mathematical relations for an FC-DNN with two hidden layers are given by the following equations:

y^{(1)} = f\left( W^{(1)} x + b^{(1)} \right) \quad (4)

y^{(2)} = f\left( W^{(2)} y^{(1)} + b^{(2)} \right) \quad (5)

z = f\left( W^{(z)} y^{(2)} + b^{(z)} \right) \quad (6)

Equations (4), (5), and (6) denote the results obtained from the first, the second, and the output layer, respectively.
Accordingly, the proposed FC-DNN models can be described with the following input–output relations (Al-Sa'd et al., 2019):

z_i^{(l)} = f^{(l)}\left( W^{(l)} z_i^{(l-1)} + b^{(l)} \right) \quad (7)

where i is the index of the input RF segment; z_i^{(0)} = y_i is the power spectrum of the i-th input RF segment; z_i^{(l-1)} is the output of layer l − 1 and the input to layer l; z_i^{(l)} is the output of layer l and the input to layer l + 1; z_i^{(L)} = c_i is the classification vector for the i-th input RF segment; b^{(l)} = [b_1^{(l)}, b_2^{(l)}, \cdots, b_{H^{(l)}}^{(l)}]^T is the bias vector of layer l; and f^{(l)} is the activation function of layer l (l = 1, 2, \cdots, L, where L − 1 is the total number of hidden layers). Also, the weight matrix of layer l is designated as W^{(l)}:

W^{(l)} = \begin{bmatrix} w_{11}^{(l)} & \cdots & w_{1H^{(l-1)}}^{(l)} \\ \vdots & \ddots & \vdots \\ w_{H^{(l)}1}^{(l)} & \cdots & w_{H^{(l)}H^{(l-1)}}^{(l)} \end{bmatrix} \quad (8)

where w_{ij}^{(l)} is the weight between the i-th neuron of layer l and the j-th neuron of layer l − 1.
The three sets of hidden layers consist of 256, 128, and 64 neurons, respectively. Compared to the original FC-DNN model introduced in (Al-Sa'd et al., 2019), which had only three fully-connected hidden layers with the ReLU activation function, this is an enhancement. This can be seen in Fig. 7, which shows an example of the fourth FC-DNN model used for multiple drones detection.
The input layer of the fourth FC-DNN model has the size of the power RF spectrum calculated with 2048 frequency bins. Next, hidden layers organized in three sets are engaged. Although unusual, a combination of two activation functions was used for each set of hidden layers in the FC-DNN model. The sigmoid function is affected by saturation issues, which are explained in (Manessi & Rozza, 2018), so the ReLU function is engaged to overcome this weakness and improve the accuracy results of the FC-DNN. Finally, the output layer of the fourth FC-DNN is a fully-connected layer of four neurons with the SoftMax activation function. For the training and validation process, the following FC-DNN parameters were used during the experiments: the stochastic gradient descent (SGD) optimization algorithm with backpropagation for error minimization, which uses the training dataset to update the model; the Adam optimizer (Zhong et al., 2020) for minimization of the classification mean square error; stratified K-fold cross-validation (K = 10) for bias minimization (to overcome the imbalance between classes); the SGD hyperparameter that controls the number of training samples per update (batch size = 20); and the SGD hyperparameter that controls the number of complete passes through the training dataset (total number of epochs = 100).

3.4. Implementation

For implementation purposes, the proposed FC-DNN models were implemented with Python Anaconda version 1.9.2 and the Tensorflow 2.1.0 (including Keras 2.3.0) framework with a GPU environment setup. The host machine for this purpose was an Intel(R) Core(TM) i5-9400F CPU @ 2.90 GHz with 32 GB RAM and two GeForce RTX 2060 6 GB GDDR6 GPUs (CUDA toolkit version 10.1 and cuDNN version 7.6). Existing FC-DNNs were modified according to the specification of the RF drone dataset, and four FC-DNNs were created using Keras to perform the following tasks: to detect the presence of a drone; to detect the presence of a drone and identify its type; to detect the presence of a drone, identify its type, and determine its flight mode; and lastly, to detect the presence of a drone and identify the number of drones.
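A minimal Keras sketch of the fourth FC-DNN model, based on the description above, is given below. The exact internal composition of each "set" of hidden layers is not fully specified in the text, so pairing a sigmoid layer with a ReLU layer of the same width is an assumption made here; the 2048-bin input, the 256/128/64 neuron counts, the four-neuron SoftMax output, the Adam optimizer with a mean squared error loss, the batch size of 20, and the 100 epochs follow the stated parameters. This is a sketch, not the authors' released code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_fourth_fc_dnn(input_bins: int = 2048, num_classes: int = 4) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(input_bins,)),
        # Set 1: 256 neurons, sigmoid followed by ReLU (assumed composition of a "set").
        layers.Dense(256, activation="sigmoid"),
        layers.Dense(256, activation="relu"),
        # Set 2: 128 neurons.
        layers.Dense(128, activation="sigmoid"),
        layers.Dense(128, activation="relu"),
        # Set 3: 64 neurons.
        layers.Dense(64, activation="sigmoid"),
        layers.Dense(64, activation="relu"),
        # Output layer: one neuron per class with SoftMax activation.
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Adam optimizer minimizing the classification mean square error, as stated above.
    model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"])
    return model

# Example training call on hypothetical preprocessed data X (N x 2048, scaled to [0, 1])
# and one-hot labels y (N x 4); the paper additionally wraps calls like this one in
# stratified 10-fold cross-validation.
# model = build_fourth_fc_dnn()
# model.fit(X, y, batch_size=20, epochs=100, validation_split=0.1)
```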
Also, real-time testing was performed with the proposed algorithm. The average computing time of the proposed system workflow was measured for each FC-DNN model and presented in Table 4.
It should be noted that the average time required to execute the proposed algorithm on the host machine was obtained through a simulation with 100 segments from three different newly captured RF signals. The obtained times for the classification purposes are similar for all FC-DNN models, but the time necessary for the preprocessing stage is almost five times longer. The results from Table 4 show that it is possible to detect/identify the drone from the received RF signal within only 0.31 s (the sum of the first column and one of the remaining columns). It should be emphasized that this is a respectable outcome even though the preprocessing stage was not implemented on the GPU platform; only the trained FC-DNN models were. Based on all the above, a workflow graphic representation of the proposed algorithm is given in Fig. 8.
The workflow graphic gives a detailed description of the drone detection and identification subsystem, which consists of FC-DNN data preparation, training, and real-time classification on a pre-trained model. FC-DNN data preparation is the first phase of the workflow of the proposed algorithm and can be defined as the following step-by-step procedure: loading data from the RF drone dataset, signal segmentation, spectrum calculation, aggregation of data, and data labeling. Labeled data are afterward handled by the FC-DNN models for the training process, and the trained models are finally obtained and saved for the real-time simulation of drone detection and identification. It should be noted that four separate FC-DNN models were intentionally used for the training and testing phases. The main reason for such an unusual implementation is to satisfy the ADRO system's tactical demands. The request was to develop independent classifiers for single and multiple drones detection and identification. Because of that, the introduced problem was divided into several smaller ones (detection, type identification, flight mode identification, and drone number detection), which were solved using four separate FC-DNN models.

4. Results and discussions

The main goal of this research was to create a new RF drone dataset and to analyze the application possibilities of RF based DL algorithms in drone detection and identification. In addition, the results of the multiple drones detection and identification system are presented and discussed.
Performance assessment of the RF based ADRO system is represented with accuracy, precision, recall, error, false discovery rate (FDR), false-negative rate (FNR), and F1 scores via appropriate confusion matrices (Al-Sa'd et al., 2019). To better understand the performance of the FC-DNN models, an example of a confusion matrix for two classes, with an explanation of the corresponding rows, columns, and cells, is presented in Fig. 9.
Next, in Figs. 10 and 11, the overall results of the performance assessment of the RF based drone detection and identification system for both ISM bands are presented. This is convenient because it makes it easy to compare the results of detection and identification of drones in the 2.4 and 5.8 GHz ISM bands. Also, TensorFlow metrics tracked and visualized during the training process, such as loss and accuracy, are presented in the form of graphs (see the supplementary material).
First of all, Fig. 10 (a) and (b) shows the classification performance of the first FC-DNN model, which detects the presence of a drone in the 2.4 and 5.8 GHz ISM bands, respectively. The results show an average accuracy of 98.6% and an average F1 score of 97.8% for the 2.4 GHz ISM band, and an average accuracy of 99.8% and an average F1 score of 99.6% for the 5.8 GHz ISM band. The absolute difference in average accuracy between the ISM bands for the first DNN model is 1.2%.
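As a reference for the scores reported here and in the confusion matrices that follow, the sketch below shows how accuracy, precision, recall, error, FDR, FNR, and F1 relate to a confusion matrix whose rows are target classes and whose columns are predicted classes. The numeric example is hypothetical and is not taken from the paper.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray) -> dict:
    """Per-class metrics from a confusion matrix cm[target, predicted]."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as the class but belonging elsewhere
    fn = cm.sum(axis=1) - tp          # belonging to the class but predicted elsewhere
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": tp.sum() / cm.sum(),
        "error": 1.0 - tp.sum() / cm.sum(),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "fdr": fp / (tp + fp),        # false discovery rate = 1 - precision
        "fnr": fn / (tp + fn),        # false negative rate = 1 - recall
    }

# Example with a hypothetical 2-class confusion matrix (rows: target, columns: predicted):
# cm = np.array([[980, 20],
#                [ 15, 985]])
# print(per_class_metrics(cm))
```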
first paper that has presented and explained detection and identification in both ISM bands. More importantly, a new FC-DNN model was constructed, and thereafter its performance was tested with this new RF drone dataset. The created system model shows respectable results in multiple drones detection, which is also unique research, especially in
Fig. 10. Confusion matrices for the first two FC-DNN models, designed for drone detection and drone type identification: (a) drone detection with 2 classes in 2.4 GHz, (b) drone detection with 2 classes in 5.8 GHz, (c) drone type identification with 4 classes in 2.4 GHz, and (d) drone type identification with 4 classes in 5.8 GHz. See Table 3 for the class labeling.
The obtained result of 98.6% for the average accuracy of drone detection in the 2.4 GHz ISM band is marginally worse compared to the work in (Al-Sa'd et al., 2019), where the achieved result was 99.7%. The main reason for this lies in the fact that a more representative RF background (ambient noise) was used in this research (simulated Bluetooth and Wi-Fi signals were used during the first step of each experiment). Furthermore, in the process of dividing the whole considered RF signal into snapshots of 100,000 samples (RF signal segmentation), the segments that do not contain a useful signal were not discarded. However, the obtained result of 99.8% for the average accuracy of drone detection in the 5.8 GHz ISM band is better compared to the work in (Al-Sa'd et al., 2019). This is an expected result, because the 5.8 GHz ISM band tends to be less crowded than the 2.4 GHz ISM band, since fewer devices use it and it has more allocated channels.
The average accuracy of the drone number detection is very high in both ISM bands. This can be explained with the following evidence: spectrograms for the RF background and for RF drone radio communications are quite dissimilar, and it is easy to visually distinguish the different RF activities in the spectrogram when two or three drones operate simultaneously. This result is an outstanding outcome of this research and, more importantly, it is independent of the observed ISM band. The obtained results for the average accuracy of the drone number detection, 96.2% and 97.3% in the 2.4 and 5.8 GHz ISM bands, respectively, are better compared with the work in (W. Zhang & Li, 2018), where the achieved result was 94.2%, albeit for a radar sensor. Also, these results are an excellent basis for possible research on the application of DL algorithms in the detection of drone swarming.
Detection of the drone's type is considerably improved in comparison to similar studies. This resulted from the fact that the signal preprocessing step is enhanced by using the power spectrum calculation (spectral energy distribution) instead of a discrete Fourier transform
attributed to the fact that spectrograms for different flight modes of one drone can be very similar. The obtained results for the average accuracy of the drone's flight mode identification, 85.9% and 86.9% in the 2.4 and 5.8 GHz ISM bands, respectively, are better compared to the work in (Al-Sa'd et al., 2019), where the achieved result was 46.8%. Moreover, these particular results are not so essential, because in a real-world ADRO system it will not be necessary to detect all flight modes, but perhaps just hovering and flying with video recording.
There is an evident deterioration in the performance of the FC-DNN when the number of classes increases. This phenomenon can be
Fig. 12. Average classification performance for the fourth designed FC-DNN model using confusion matrices: (a) number of detected drones with 4 classes in 2.4 GHz and (b) number of detected drones with 4 classes in 5.8 GHz. See Table 3 for the class labeling.
Table 5
Comparison of the obtained average accuracy with the state-of-the-art ML and DL algorithms. For each task (detection accuracy, type identification accuracy, flight mode identification accuracy, and multiple drones number detection accuracy), results are reported for the literature and for our dataset.
practical implementation in real case scenarios. This research can be extended in various ways, such as: expanding the existing dataset by conducting experiments in indoor and outdoor conditions with various sensors (RF, audio, OES, and radar); using other types of drones, where drone speeds vary and the distance from the RF sensor is greater; examining the effects on FC-DNN accuracy of channel fading, noise, or jamming signals; and performing different spectrum calculations. The research and development of algorithms that include multimodal fusion will be the main objective of future work. The intention is to connect the proposed FC-DNN algorithm with an LSTM algorithm to exploit data from RF and audio sensors. Moreover, the multimodal fusion implementation in the GPU environment and testing in real situations will be the ultimate ADRO research goal.

CRediT authorship contribution statement

Boban Sazdić-Jotić: Conceptualization, Data curation, Software, Writing - original draft. Ivan Pokrajac: Supervision, Investigation, Validation. Jovan Bajčetić: Visualization, Investigation, Writing - review & editing. Boban Bondžulić: Methodology, Supervision, Writing - review & editing. Danilo Obradović: Data curation, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research was conducted under the project funded by the University of Defence in Belgrade, Grant No. VA–TT/3/20–22, and the project RABAMADRIDS funded by the Military Technical Institute (MTI) in Belgrade. The findings achieved herein are solely the responsibility of the authors.