
IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, VOL. 8, NO. 1, MARCH 2022

Combined RF-Based Drone Detection and Classification

Sanjoy Basak, Sreeraj Rajendran, Sofie Pollin, Senior Member, IEEE, and Bart Scheers

Abstract—Despite several beneficial applications, unfortunately, drones are also being used for illicit activities such as drug trafficking, firearm smuggling or to impose threats to security-sensitive places like airports and nuclear power plants. The existing drone localization and neutralization technologies work on the assumption that the drone has already been detected and classified. Although we have observed a tremendous advancement in the sensor industry in this decade, there is no robust drone detection and classification method proposed in the literature yet. This paper focuses on radio frequency (RF) based drone detection and classification using the frequency signature of the transmitted signal. We have created a novel drone RF dataset using commercial drones and present a detailed comparison between a two-stage and a combined detection and classification framework. The detection and classification performance of both frameworks is presented for a single-signal and a simultaneous multi-signal scenario. With detailed analysis, we show that the You Only Look Once (YOLO) framework provides better detection performance compared to Goodness-of-Fit (GoF) spectrum sensing for a simultaneous multi-signal scenario, and classification performance comparable to a Deep Residual Neural Network (DRNN) framework.

Index Terms—Signal detection and classification, sensor systems and applications, UAV.

I. INTRODUCTION

THERE has been a tremendous technological improvement in the drone industry. Drones are now being equipped with state-of-the-art (SoA) technologies and sensors such as GPS, LIDAR, radar and visual sensors. These technologies enable drones to support numerous applications like cinematography, farming, surveillance and recreational activities. Drones equipped with advanced technologies have great potential for damaged infrastructure inspection, urgent aid supply, and search and rescue operations in remote and unreachable places. Apart from these beneficial applications, drones are also being used for illegal activities which impose risks to public safety. The illegal activities include, but are not limited to, violation of public privacy, drug trafficking, firearm smuggling, bombing, and invading security-sensitive places like airports and nuclear power plants.

Several Counter Unmanned Aircraft Systems (C-UAS) have been proposed to disable an attack from a drone. They are mainly divided into two categories: hard and soft interception (kinetic or non-kinetic solutions). The kinetic solutions include intercepting a drone using (i) a trained bird of prey, (ii) a net gun [1], (iii) a laser beam, and (iv) a firearm. The non-kinetic solutions include (i) GPS spoofing [1] to deceive a drone's localization system and (ii) RF jamming. Irrespective of the chosen solution for any environment, the presence of a drone should be detected and classified beforehand.

Detecting and classifying a drone automatically is a challenging task. Some popular technological approaches to detect and classify a drone include (i) radar detection, (ii) video detection, (iii) acoustic detection, and (iv) RF-based detection. A comprehensive literature review on the current SoA machine-learning-based drone detection and classification using these technologies is presented in [2]. Researchers have also proposed to integrate multiple technologies [3] for the detection and classification of UAVs.

Radar detection exploits the back-scattered RF signal to detect and classify a drone. Conventional radar systems will fail to detect a mini-drone due to its small radar cross section (RCS). To overcome this problem, researchers utilized the micro-Doppler signature of a quadcopter or a multi-rotor UAV to detect and classify it using a multi-static radar [4] or a Frequency Modulated Continuous Wave (FMCW) radar [5], [6]. A complete review of the detection and classification strengths of the current SoA FMCW radars is presented in [6].

Video/image detection includes both visual and thermal detection, and in [7]–[9] researchers proposed several drone detection methods using this technology. With this technique, drone detection is performed by analyzing the drone's color, shape and edge information [7]. The detection method is reliable; however, it requires a line of sight (LOS) between the drone and the camera, and the performance is highly dependent on daylight conditions and weather conditions like dust, rain, fog and cloud. Furthermore, the resemblance of a bird to a drone makes it more challenging for a video detector. In [8], the authors utilized the motion and trajectory information of a drone to differentiate it from a bird. A brief overview of the frameworks capable of differentiating a drone from a bird is presented in [10].

The acoustic detection system utilizes the sound generated by flying drones to detect their presence using microphones.

Manuscript received February 1, 2021; revised May 5, 2021 and June 28, 2021; accepted July 13, 2021. Date of publication July 26, 2021; date of current version March 8, 2022. This work was supported by the Belgian Ministry of Defence. The associate editor coordinating the review of this article and approving it for publication was H. T. Dinh. (Corresponding author: Sanjoy Basak.)
Sanjoy Basak is with the Department CISS, Royal Military Academy, 1000 Bruxelles, Belgium, and also with the Department ESAT, KU Leuven, 3001 Leuven, Belgium (e-mail: [email protected]).
Bart Scheers is with the Department CISS, Royal Military Academy, 1000 Bruxelles, Belgium (e-mail: [email protected]).
Sreeraj Rajendran and Sofie Pollin are with the Department ESAT, KU Leuven, 3001 Leuven, Belgium (e-mail: [email protected]; [email protected]).
Digital Object Identifier 10.1109/TCCN.2021.3099114
2332-7731 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: Universiti Tun Hussein Onn Malaysia. Downloaded on April 18,2024 at 06:40:27 UTC from IEEE Xplore. Restrictions apply.
In [11], the authors proposed a framework using Hidden Markov Models (HMM) to perform phoneme analysis and identify a flying drone from its emitted sounds. Furthermore, the detection and tracking of a drone using an array of microphones has also been proposed in the literature: a small tetrahedral array [12] or a microphone array consisting of 120 elements [13] is used for drone detection and tracking. Acoustic detection generally works well in a quiet or less noisy environment; however, the performance deteriorates if the environment is noisy, such as urban or industrial areas or near seashores.

One of the most promising approaches to detect the presence of a drone is RF sensing. Commercial drones perform RF communication with their ground control station (GCS) for flight control and navigation, live video transmission and transfer of telemetry information. Autonomous drones also perform active RF communication to transfer live video and telemetry messages. An RF drone detection system can detect a drone by monitoring the communication frequency spectrum. A few RF-based drone detection techniques have been proposed in the literature [14]–[19]. In [14], the presence of a drone is detected by monitoring how frequently data packets are transmitted at 2.4 GHz. Since most drones use different non-standardized protocols for communication with their controller, the data packet transmission rate differs from WiFi and other Access Points (AP) [14]. In [15], the detection is performed by measuring the data packet length of a drone's communication link. These detection methods are inefficient, since the detector can easily be spoofed by an application communicating with an AP at the same packet transfer rate or with the same packet length as a drone. In [16], a WiFi-based drone surveillance method is proposed, where the identification is performed by a WiFi statistical fingerprinting technique. In [20], we observed that several commercial drones' GCS use Frequency Hopping Spread Spectrum (FHSS) transmission as the radio control (RC) signal, which should also be accounted for in the identification method.

The detection and identification of a drone using frequency signatures is presented in [17], [18], using Deep Neural Network (DNN) based classifiers. In [17], the authors developed a dataset using three commercial drones and used a simple feedforward DNN to detect and identify them. In [18], the authors presented the detection, identification and classification on the same dataset using a Convolutional Neural Network (CNN). These studies were performed on a limited dataset, and the impact of noise on the detection performance was not studied. Moreover, the detection performance in the presence of multiple signals or interference was not investigated.

We presented RF-based drone detection using GoF spectrum sensing and DoA estimation using the MUSIC algorithm in [21]. Drone signal detection using wideband CFAR-based energy detection and the feature extraction performance is presented in [20]. In [22], we presented drone signal classification using a DRNN framework. The classification was performed assuming a signal had already been detected by a spectrum sensing algorithm. A complete solution for drone detection and classification based on RF fingerprints was not presented in our previous works, which we address in this paper. We propose two complete solutions for drone signal detection and classification, and provide an in-depth performance comparison. The detection and classification is performed in two different ways: (i) a two-stage detection and classification process, where the signal detection is initially performed using an efficient spectrum sensing method that detects all of the signals present in the spectrum, and the detected signals are then passed to a SoA classifier to provide robust classification; and (ii) combined detection and classification, where the signals are detected and classified simultaneously. For both proposed methods, we perform detection and classification with the received signal from a single receiver and by using frequency domain fingerprints. The advantages are: (i) using the received signal from a single receiver eliminates the requirement of calibrating multiple receivers, which makes both methods easily deployable with a low-cost SDR and a computational unit; and (ii) the frequency domain detection and classification provide the necessary information for a possible RF jammer. Both methods can perform fast detection and classification, even in the presence of signals overlapping in both time and frequency, and they are more generalized and robust as they learn the position and type of signal.

The main contributions of this paper are the following.
1) A novel and realistic multi-signal dataset is created using nine commercial drones and non-drone signals (i.e., WiFi communication signals). The dataset will be made public for future research.
2) The YOLO-lite architecture is recreated from scratch and modified to perform the combined drone signal detection and classification. The two-stage detection and classification is performed using GoF spectrum sensing and a DRNN classifier.
3) Simultaneous multi-signal detection, spectrum localization and classification in the ISM band is presented in this paper. We are the first to propose a framework for simultaneous multi-signal drone detection and classification.
4) The detection and classification performance of both frameworks is evaluated on our dataset. Through detailed comparisons, we show that the YOLO-lite framework provides better detection performance compared to the GoF sensing and classification performance comparable to the DRNN classifier.

The rest of the paper is organized as follows. A mathematical model of the received signal in an ISM band is presented in Section II. Section III provides an overview of the SoA techniques for two-stage and combined signal detection and classification. The technical details of the two-stage and combined detection and classification are presented in Section IV. The dataset development and the experiment strategies are explained in Section V. The performance analysis is presented in Section VI and the concluding remarks are provided in Section VII.

II. PROBLEM STATEMENT

The ISM bands are generally populated by several homogeneous and heterogeneous RF transmissions. The transmitters generally use spread spectrum technology to perform the
Fig. 1. Presence of multiple signals at 2.4 GHz ISM band. (Spectrogram, FFT frame number vs. frequency in MHz over 2390–2480 MHz, showing Tello, WiFi, Wltoys and Parrot transmissions.)

communications. The FHSS transmissions are generally blind, whereas the Direct Sequence Spread Spectrum (DSSS) transmissions are often cognitive in nature. Most commercial drones use a DSSS signal for video transmission. Unlike FHSS transmissions, they perform sensing to find a free (or relatively free) channel before starting to transfer the video signal. One example of such heterogeneous transmission at 2.4 GHz is shown in Fig. 1. As can be seen from the figure, four transmissions are occurring at the same time, where three transmitters use DSSS technology and one transmitter uses FHSS technology.

The received signal from a drone can be expressed as:

    r(t) = Σ_{k=1}^{K} y_k(t) * h_k(t) + n(t),    (1)

where y_k(t) is the complex baseband transmitted signal, h_k(t) is the time-varying impulse response, k denotes the index of the transmitted signal, K is the total number of available transmitters in the ISM band, and n(t) is the Additive White Gaussian Noise (AWGN). With the help of the discrete Fourier transform (DFT), the complex time-domain receive signal can be converted into M_t consecutive segments, each of length N_f (= FFT size). The magnitude of the DFT matrix gives us the spectrogram matrix. This study aims to detect and classify drone and WiFi communication signals from the spectrogram matrix. The spectrogram representation provides more information compared to a Power Spectral Density (PSD) or IQ representation of the signal. It enables the classifier to determine useful RF signal features like frequency, bandwidth, dwell time and hop rate. Since commercial drones use pseudo-random number generators to generate the communication signals, the hopping pattern or the signal position will vary. The objective of our work is not to learn the hopping pattern, but rather to learn how the data/signal is distributed in the spectrum in order to detect and classify them. Deep Learning (DL) algorithms learn the signal distribution, which depends on factors like (i) frequency, (ii) bandwidth, (iii) modulation, (iv) filtering parameters, (v) device nonlinearities, etc. We aim to utilize the DL algorithm to learn these factors from the spectrogram matrix.

III. BACKGROUND

A. Signal Detection

The conventional spectrum sensing methods can be classified into parametric and non-parametric methods. The parametric methods require prior knowledge of the transmitted signal for the detection, whereas the non-parametric methods, also known as blind sensing, do not require any knowledge of the signal. The most popular spectrum sensing methods are energy detection, eigenvalue detection, matched filtering and cyclostationary feature detection [23]. Energy detection and eigenvalue detection are non-parametric methods. Energy-based detection has been widely used for decades, mainly due to its simplicity: the signal is detected if the measured energy is greater than the threshold corresponding to the specified false alarm rate. The eigenvalue detection method estimates the ratio of the maximum and minimum eigenvalues and compares it with the threshold to determine if any signal is present. Cyclostationary and matched filtering detection are parametric methods; they require perfect knowledge of the transmitted signal and can work better at lower signal-to-noise ratio (SNR) compared to energy-based detection [24]. The cyclostationary spectrum sensing method exploits the periodicity introduced in the transmitted signal. The matched filtering method detects a signal by correlating a known template (extracted from the transmit signal) with the received signal. Both methods are targeted towards specific (or known) signals and require a high computational cost.

For the drone signal detection within the two-stage detection and classification, we choose the wideband GoF-based blind spectrum sensing algorithm [25]. GoF sensing can provide better performance compared to conventional energy detection using fewer samples of the received signal, at low SNR conditions and in the presence of non-Gaussian noise [26]. The wideband GoF sensing uses the DFT to divide the frequency band into small frequency bins and performs narrowband GoF sensing on each bin. In this paper, we have used the Anderson-Darling test statistic for the GoF sensing [25].

B. Signal Classification

DL methods have shown SoA performance in the classification of wireless signals and have outperformed the conventional classification methods. Some remarkable works have been published [27], [28] in the past few years regarding the classification of modulated signals and device fingerprinting using merely the raw received signals. In recent years, CNN frameworks have been widely investigated for wireless signal recognition and classification problems [27], [29]. Among different variants of CNNs, the residual network-based CNN [30] has shown great performance and outperformed other classifiers with equivalent network depth. In this paper, we adapt the DRNN proposed in [27] for the drone and WiFi signal classification.

C. Combined Detection and Classification

The DNN-based visual object detection and classification techniques provide great tools such as YOLO [31] for combined RF signal detection, frequency localization and classification. Signal detection/classification from a spectrogram image is analogous to visual object detection and classification. A spectrogram image provides the time and frequency information of a spectrum instance, which can be utilized
to perform the detection and classification. Since we are interested in wideband signals, the localization and bounding box will also enable us to determine important features of the detected signal, such as center frequency, bandwidth, dwell time and hop interval. Such information can be used within a cognitive radio to perform dynamic spectrum access functionality and avoid collisions in a spectrum-sharing environment.

YOLO was first used in [32] to perform signal detection and frequency localization. In [33], WiFi and LTE signal detection, feature extraction and classification were performed using a pre-trained YOLO framework. In this paper, we develop a YOLO framework from scratch to perform the combined drone RF signal detection and classification on the spectrogram image.

Fig. 2. Building blocks of the deep residual network architecture.
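The two-stage front end described above — segmenting the received IQ stream with an N-point DFT (Section II) and running a narrowband Anderson-Darling GoF test on every frequency bin (Section III-A) — can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the paper's implementation: the median-based noise-power estimate is a simplification (the paper estimates the noise power from the power-spectrum histogram [34]), and the fixed threshold λ = 3.89 is the 5% false-alarm-rate value the paper reports later in Section IV.

```python
import numpy as np
from scipy.stats import chi2

def ad_statistic(x):
    """Anderson-Darling statistic A_n^2 of samples x against a chi-square(2) CDF."""
    z = np.sort(chi2.cdf(x, df=2))
    n = len(z)
    z = np.clip(z, 1e-12, 1 - 1e-12)      # guard the logarithms
    i = np.arange(1, n + 1)
    return -n - np.sum((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1]))) / n

def gof_detect(iq, n_fft=256, lam=3.89):
    """Wideband GoF sensing sketch: split the IQ stream into K segments,
    DFT each segment, then run a narrowband AD test per frequency bin.
    Returns a boolean occupancy mask (one flag per bin) and the
    spectrogram magnitude matrix (K segments x n_fft bins)."""
    k = len(iq) // n_fft
    X = np.fft.fft(iq[: k * n_fft].reshape(k, n_fft), axis=1)
    power = np.abs(X) ** 2
    # Crude noise estimate: in noise-only bins |X|^2 ~ (N*sigma^2/2)*chi2(2),
    # whose median is N*sigma^2*ln(2). The paper instead uses the histogram
    # of the power spectrum [34].
    n_sigma2 = np.median(power) / np.log(2)
    x_norm = 2 * power / n_sigma2          # ~ chi-square(2) under noise only
    occupied = np.array([ad_statistic(x_norm[:, b]) > lam for b in range(n_fft)])
    return occupied, np.abs(X)
```

A noise-only input should trip only a few bins (the false alarms), while a narrowband transmission raises the AD statistic of its bin far above λ.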

IV. TECHNICAL APPROACH

A. Two Stage Detection and Classification

1) Signal Detection: The signal detection is performed using the Anderson-Darling (AD) GoF test [25]. The complex time-domain receive signal is converted into the frequency domain using an N-point DFT operation on K consecutive segments. This results in a sequence X_k of length K for every frequency bin. We perform a hypothesis test using an AD tester for each frequency bin to decide whether only noise or a signal is present in the bin. We assume there is only noise present in the frequency bin if the normalized power spectral coefficient 2|X_k|²/(N σ²) follows a χ² distribution. The length of the DFT is N and σ² is the noise power. We estimated the noise power by exploiting the histogram of the power spectrum [34]. The AD test statistic A_n² is calculated for each frequency bin as:

    A_n² = −n − (1/n) Σ_{i=1}^{n} (2i − 1) [ln z_i + ln(1 − z_{n+1−i})],    (2)

with z_i = F_0(x_i). Here, F_0 represents the Cumulative Distribution Function (CDF) of a chi-square distribution with 2 degrees of freedom, x_1 ≤ x_2 ≤ · · · ≤ x_n are the samples under test and n represents the total number of samples.

If A_n² > λ, we assume a signal is present in the frequency bin; otherwise, we assume that the frequency bin only contains noise. Here, λ corresponds to the detection threshold. A detailed explanation of the AD GoF sensing, the derivation of A_n² and the procedure to perform the hypothesis test on complex received signals from a Software Defined Radio (SDR) are provided in [25], [26]. The value of λ is determined considering a 5% False Alarm Rate (FAR). The suitable value for λ was calculated through AD GoF tests on the drone dataset; λ = 3.89 provided an approximate 5% FAR.

2) Signal Classification: The classification is performed using an adaptation of the DRNN framework proposed in [27]. The architecture is depicted in Table I and the building blocks are shown in Fig. 2. The DRNN framework consists of N residual stack units, two fully connected (FC) layers and a softmax layer. For all convolution operations, we have used 32 filters with a kernel size of 3x3, apart from the first layer of the residual stack where the kernel size is 1x1. For the max pooling, a kernel size of 2x2 is used with a stride factor of 1. For each FC layer, we have employed a scaled exponential linear unit (SeLU) activation and mean response scaled initialization [35]. To prevent overfitting, we have applied 50% dropout after each FC layer. A softmax activation is used at the final layer to give the prediction probability. We have not performed any batch normalization, since we did not observe any additional improvement with it in the classification performance.

If any signal is detected by the spectrum sensing algorithm, the complete spectrum (i.e., the time-domain RX signal) is passed to the classification stage. The time-domain signal is converted to a spectrogram of size 256 x 256 and the classification is performed on it. Since we are interested in comparing the classification performance with the YOLO framework, we kept the input size the same for both algorithms.

Fig. 3. Signal recording schematic.

TABLE I. DRNN architecture.

B. Combined Detection and Classification With YOLO

We implement one of the variants of the YOLO architecture to perform the simultaneous detection and classification of drone and WiFi communication signals. One of the biggest strengths of the YOLO framework is that it can detect the signal, determine spectral features like frequency, bandwidth and dwell time, and predict the class of the detected signal simultaneously. The raw spectral power values of the RF signal
TABLE II. YOLO network architecture.

Fig. 4. Test setup in an anechoic chamber.

TABLE III. Drones, radio controllers and WiFi sources used for the dataset development.

in time and frequency domain are used as the input. Since recognizing such time-frequency domain spectral events is relatively simpler than visual object recognition [32], a smaller network may be sufficient for the YOLO detection and classification task. In our experiments, we have adapted the YOLO-lite [36] architecture, which is a smaller and faster network and can be deployed on a non-GPU computer. Our adaptation of the YOLO-lite architecture is shown in Table II. We have used leaky-ReLU activation after each convolution operation (C1–C6) and linear activation on the C7 layer. The max-pooling operation is performed after the convolutions (C1–C5). Finally, a fully connected layer is employed, and sigmoid activation is performed.

A spectrogram dataset with a dimension of 256x256 is used as the input. The network produces an output grid containing the detection probability, bounding box coordinates and class probabilities as follows:

    O/P_shape = S × S × (B × [C, x, y, w, h] + P).    (3)

Here, S denotes the size of the grid. Each grid cell contains B bounding boxes, the confidence score of the detection C, the 2D coordinates x, y, the width w and height h of the object, and the class probability P. We have used a grid size of 16, 2 bounding boxes and 10 different classes (Table III) for our tests. In the original YOLO-lite architecture an output grid size of 8 was used; however, we found during our tests that in order to annotate the DSSS spectrograms (e.g., Tello, Parrot), a higher grid size is required. With the above specified parameters, the output dimension becomes 16 x 16 x 20. We have used the adam optimizer [37] to optimize the training loss. The training loss involves the minimization of the sum of mean squared error losses between the ground truth and the network prediction. The complete training loss function provided in [31] is used for the training optimization.

V. EXPERIMENTS

A. Experimental Setup

The drone and WiFi signals were recorded in an anechoic chamber. For this experiment, we only considered transmitters operating at 2.4 GHz. A universal software radio peripheral (USRP) X310 was used with an omnidirectional antenna. A receive sampling rate of 100 MSps was used to receive instantaneously from the complete 2.4 GHz ISM band. Nine commercial radio controllers with drones and two WiFi routers (Table III) were used to develop the dataset. The devices were placed seven meters from the receiver.

Since a UAV controller generally uses a pseudorandom generator to generate the FHSS sequences of the RC signal, we included all possible hop sequences from each controller in our database. The controllers were turned off and on several times during data collection to inspect whether the hop position changes, and to include that as well in our database.

B. Dataset Development

To test the classification performance at lower SNRs, we introduced AWGN to the signal in the simulation environment. Generally, the SNR is calculated in the time domain by measuring the transmission power of the signal. Since the signal bandwidth differs between drones, it becomes difficult to calculate the SNR in the time domain. Therefore, we calculated the signal SNR in the frequency domain, as shown in Fig. 5(a). Since the bandwidth and transmission power differ between drones, the calculated SNR was also slightly different for different noise values, as presented in Fig. 5(b). To keep the performance analysis of the different frameworks on the dataset simple, we considered the average SNR for the introduced AWGN values, as shown in Fig. 5(c).

In [22], we evaluated the classification performance in Rician and Rayleigh fading simulation environments, where the classifier was trained with the AWGN-faded dataset. We did not observe any significant deviation in the classification performance due to the channel variation. Therefore, in this paper, we only evaluate the classification performance under AWGN conditions.

C. Implementation Details

The GoF sensing was implemented in MATLAB. The DRNN framework was implemented using Tensorflow-Keras and the YOLO-lite framework was implemented using TFlearn
Fig. 5. SNR variation using AWGN in the simulation environment.

in Python, both running on top of Tensorflow [38]. The simulations and the neural network training and testing were performed on an Intel Core i7 computer equipped with an Nvidia RTX 2080 GPU. The adam optimizer with a learning rate of 0.001 was used for the training optimization of both networks. The training was performed with a batch size of 32.

VI. PERFORMANCE ANALYSIS

The performance analysis was performed for two scenarios: (i) detection and classification in the presence of one signal at a time under AWGN conditions, and (ii) detection and classification in the presence of multiple simultaneous signals, with frequency-overlapped and non-overlapped cases.

Fig. 6. Signal detection with YOLO-Lite.

A. Single Signal Detection and Classification

Signal detection and spectrum localization with the YOLO-lite framework are presented in Fig. 6. As Fig. 6 shows, the signals were detected, localized in the spectral domain and classified by the framework accurately. The average probability of detection (PD) of YOLO under different SNR conditions is presented in Fig. 7(a). A detection from the YOLO-lite prediction is considered to be true if it satisfies the following two conditions: (i) the confidence score of any bounding box is greater than the specified threshold (i.e., C > 0.4), and (ii) the Intersection over Union (IoU) is greater than or equal to 0.50 (i.e., IoU ≥ 0.50). The PD for the GoF test is calculated by comparing the true frequency bins with the predicted frequency bins from the AD test. As Fig. 7(a) displays, the PD from YOLO is comparable with the GoF test. The detection probability increases for both frameworks as the SNR increases. The YOLO PD saturates around 96%, which it reaches at around −3 dB SNR. This saturation happens because YOLO often could not detect all closely spaced hops of signals from Tello, Parrot and WiFi. One example of such a spectrum is shown in Fig. 6(c). The GoF test also provides around 96% PD around −3 dB SNR; however, it increases further and reaches 99.9% at 3 dB SNR.

To evaluate the classification performance, we used the F1-score, which is the harmonic mean of precision and recall. Since the F1-score takes both precision and recall into account, it allows us to compare the performance of different classifiers using just one metric. The classification performance of the YOLO and DRNN frameworks is plotted in Fig. 7(b). In order to compare the classification performance of YOLO with the DRNN, we performed the classification with the DRNN framework independent of the GoF detection. At lower SNR, DRNN provides a better F1-score compared to YOLO. This is expected, since YOLO-lite is a much shallower framework compared to the DRNN framework. The
F1-scores increase with increasing SNR for both frameworks. The F1-score reaches approximately 97% at −3 dB SNR for both frameworks. The F1-score from YOLO-lite saturates around 97% at higher SNRs, while the F1-score from the DRNN framework increases to 99% at 3 dB SNR.

Fig. 7. Detection probability and classification performance under AWGN conditions with the YOLO and DRNN frameworks. Train samples: 4.9k; test samples: 2.1k.

The classification performance of our YOLO-lite and DRNN frameworks is compared with other existing frameworks, namely the DNN framework proposed in [17] and the Tiny-YOLOv2-VOC framework [36]. The F1-scores of the classification performance under AWGN conditions are plotted in Fig. 8. The DRNN framework provided the best classification performance compared to the other frameworks. The Tiny-YOLOv2 provided slightly better classification performance compared to the YOLO-lite framework. This is expected, since it is a slightly deeper framework than YOLO-lite. We also recreated Tiny-YOLOv2 from scratch for this classification test. The DNN model provided a lower F1-score compared to the other frameworks from −10 dB SNR onwards. The F1-score of this framework saturated at around 89%, whereas YOLO-lite, Tiny-YOLOv2 and DRNN provided around 97%, 98% and 99% F1-scores, respectively, at higher SNRs.

Fig. 8. Classification comparison with other frameworks under AWGN conditions. Train samples: 4.9k; test samples: 2.1k.

Fig. 9. Simulation schematic for multi-signal detection and classification.

Fig. 10. Detection and classification with YOLO in presence of multiple signals.

B. Simultaneous Multi-Signal Detection and Classification

In order to test the detection and classification performance in a simultaneous multi-signal scenario, the signals were added in the simulation environment as shown in Fig. 9. To ensure

the number of specified signals present in the spectrogram, we performed spectrum sensing before adding the RX signals. Each signal burst was a vector of 256 × 256 = 65536 complex samples in the time domain. AWGN was introduced after the signals were added, and the result was converted to the frequency domain. The detection and classification results from YOLO-lite are shown in Fig. 10. In order to assess the prediction accuracy, the ground truth and the prediction are plotted side by side, with the true and predicted classes annotated in white. As can be seen from Fig. 10, the signals were detected, localized in the spectral domain and classified accurately. YOLO-lite could also detect and classify signals that overlap in the frequency domain: in Fig. 10(b), the WiFi and Tello signals overlap in the frequency domain, and the DX6i and Parrot signals likewise overlap. All of these signals are localized and classified accurately.

The multi-signal detection with the GoF sensing is shown in Fig. 11. The spectrogram image is plotted in Fig. 11 (top) and the AD GoF result in Fig. 11 (bottom). As can be seen from the figure, all seven signals are detected correctly.

Fig. 11. Signal detection using GoF spectrum sensing in presence of multiple signals.

The detection threshold for YOLO-lite was chosen to be 0.4, such that the maximum FA rate at the lowest SNR remains below 5%. The threshold was kept the same for single and simultaneous multi-signal detection.

The PD of the GoF sensing and YOLO-lite over different SNRs is plotted in Fig. 12. As can be seen from the figure, YOLO-lite showed better detection performance than the GoF sensing. For GoF sensing, the detection performance over different SNRs did not vary with the number of sources; on the contrary, an increase in detection performance with the number of sources was observed with YOLO-lite. As Fig. 5(b) shows, the SNRs of the different signals differ. With wideband GoF sensing, the presence of multiple signals with different SNRs in the spectrum impacts the noise floor estimation, which may result in an overestimation of the threshold. This issue was not observed with the YOLO-lite spectrum sensing.

Fig. 12. Detection probability under AWGN conditions in presence of multiple signals simultaneously. Train samples: 30.7k, Test samples: 13.2k.

Fig. 13. Classification performance under AWGN conditions in presence of multiple signals simultaneously. Train samples: 30.7k, Test samples: 13.2k.

The classification performance of the DRNN and YOLO over different SNRs is plotted in Fig. 13. Similar to the PD, the F1-score of the YOLO classification increases with the number of sources; we observe the same phenomenon with the F1-score of the DRNN classification. After detailed investigation, we found that the classification accuracy of any individual signal remains the same in the single-signal and simultaneous multi-signal scenarios. At lower SNRs, the classification accuracies of different signals generally differ, for two reasons: (i) the actual SNRs of the signals are not the same (Fig. 5(b)), and (ii) classifiers can classify some signals better than others at lower SNRs. Fig. 5(b) shows that the actual SNR differs between classes; as can be observed, one of the signals has a very low SNR compared to the others. This is because we added constant AWGN and considered the average SNR for ease of analysis. When we calculate the average F1-score for the single-signal scenario, the independent F1-scores of all classes are weighted equally. For the multi-signal scenario, however, as the number of signals increases, the relatively low F1-score of a particular class cannot pull the average down as far as in the single-signal scenario. Therefore, as the number of signals in the multi-signal scenario increases, the average score also rises. Again, as in the single-drone scenario, the DRNN showed better classification performance than YOLO.
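The effect described above — one constant noise power with different per-class transmit powers, so that the reported average SNR hides a much weaker class — can be made concrete. A small sketch with hypothetical, purely illustrative signal powers (not measured values from the dataset):

```python
import math

def snr_db(signal_power, noise_power):
    """SNR in dB for one signal under a fixed AWGN noise power."""
    return 10.0 * math.log10(signal_power / noise_power)

# Hypothetical linear-scale powers for three of the classes; the
# noise power is constant across the band, as in the simulation.
noise_power = 1.0
powers = {"WiFi": 4.0, "Tello": 2.0, "DX6i": 0.25}

per_class_snr = {name: snr_db(p, noise_power) for name, p in powers.items()}
average_snr = snr_db(sum(powers.values()) / len(powers), noise_power)
```

With these numbers the DX6i-like class sits around −6 dB while the average SNR is above +3 dB, mirroring how one class can lie far below the single reported average.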

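The spectral localization demonstrated in Fig. 10 can be translated into signal parameters such as center frequency, bandwidth and dwell time. A hypothetical sketch, assuming normalized (x, y, w, h) box coordinates with x as the frequency axis and y as the time axis — conventions and units that depend on how the spectrograms were actually rendered:

```python
def box_to_rf_features(box, f_start_hz, f_span_hz, t_span_s):
    """Map a normalized detection box (x_c, y_c, w, h) on a spectrogram
    to RF parameters. Assumes x = frequency axis, y = time axis, and all
    coordinates normalized to [0, 1]. Illustrative only."""
    x_c, y_c, w, h = box
    center_freq_hz = f_start_hz + x_c * f_span_hz
    bandwidth_hz = w * f_span_hz
    dwell_time_s = h * t_span_s  # duration of one hop/burst
    return center_freq_hz, bandwidth_hz, dwell_time_s

# Example: 50 MHz capture starting at 2.4 GHz, 20 ms spectrogram.
fc, bw, dt = box_to_rf_features((0.5, 0.3, 0.04, 0.1), 2.4e9, 50e6, 0.02)
```

Here a box of width 0.04 on a 50 MHz span corresponds to a 2 MHz-wide signal with a 2 ms dwell time. Estimating the hop rate additionally requires counting boxes of the same class along the time axis, which this sketch does not cover.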

C. Comparison Summary

The comparisons are summarized here.

• Signal Detection: We obtained better signal detection performance with YOLO than with the GoF spectrum sensing on our dataset. At lower SNRs, the GoF sensing showed a higher false alarm rate than the YOLO detection. Since YOLO is a supervised detection framework, its performance may deviate when detecting unknown signals; this issue can be resolved with transfer learning using a small labeled dataset of the new signal. The GoF sensing, on the contrary, is a blind spectrum sensing method and is able to detect any signal present in the spectrum.

• Signal Classification: The DRNN framework provided better classification performance than the YOLO framework. This is expected from a deep residual network, since it utilizes skip connections in its architecture and the network is deeper than the YOLO framework.

• Signal Localization and Feature Extraction: Signal localization and feature extraction is the strongest feature of the YOLO framework. The localization capability of YOLO enables us to detect multiple signals simultaneously and to extract useful features from the received signal: it can provide the center frequency, bandwidth, hop rate and dwell time of the detected signal. This information is required by an RF jammer to perform soft neutralization of a drone. With the two-stage detection and classification framework, it is not possible to extract all of these features. The DRNN framework does not provide the spectral position of the signal under classification, and although the GoF sensing provides the frequency and bandwidth of a signal, in the presence of multiple signals it is difficult to associate these features with the classification labels.

• Complexity Analysis: A complexity analysis was performed to give an overview of the computational complexity and the inference time required for each framework. The total number of network parameters (trainable + non-trainable), the mean inference time on the total test set and the mean prediction time per sample are presented in Table IV. The inference time calculation was performed with the dataset used in the single-signal detection scenario, where the test set contained 2.1k samples. We used a batch size of 100 and performed the test on an Nvidia GeForce RTX 2080 Ti GPU. As can be seen from Table IV, YOLO-lite is approximately 3.4 times faster than the DRNN framework. The GoF spectrum sensing is a computationally simpler algorithm than the DRNN and YOLO-lite frameworks and does not require any high-end computational unit. During the SafeShore [39] project, we implemented the GoF sensing in C++ on an Odroid-XU4 platform and performed real-time tests.

• Limitation: One limitation of both proposed methods is the classification of completely unknown signals. Since the classification is performed in a supervised manner, the classifier may not be able to classify or provide a label for signals transmitted by newer drones. There are two possible outcomes in such a case: (i) the classifier will label the signal as an existing drone signal if the TX signal has a similar frequency fingerprint, or (ii) the classifier will be confused and provide a very low classification score for all classes. Similarly, some specific models of UAV controllers may use a completely different hopping sequence than another controller of the same model. The YOLO detection and classification performance for such cases is not yet tested. We are going to investigate and address these issues in our future work.

TABLE IV
COMPLEXITY COMPARISON

VII. CONCLUSION

In this paper, we performed drone signal detection, spectrum localization and classification using both a two-stage and a combined detection and classification method. In the two-stage technique, we used the GoF sensing for detection and the DRNN framework for classification. The YOLO-lite framework was recreated from scratch to perform the combined drone RF signal detection, spectrum localization and classification. A detailed performance comparison between the two techniques is presented using a novel drone dataset that was prepared for this study. We obtained good detection and classification performance with both techniques. Since the classification is performed in a supervised manner, the performance may deviate in the presence of unknown or newer drone signals, as discussed in detail in the limitation discussion. In future work, we are going to investigate unsupervised scenarios, since we are interested in developing a robust framework that can detect and classify all drone signals irrespective of the dataset it is trained with.

REFERENCES

[1] D. Sathyamoorthy, "A review of security threats of unmanned aerial vehicles and mitigation steps," J. Defence Security, vol. 6, no. 1, pp. 81–97, 2015.
[2] B. Taha and A. Shoufan, "Machine learning-based drone detection and classification: State-of-the-art in research," IEEE Access, vol. 7, pp. 138669–138682, 2019.
[3] G. Ding, Q. Wu, L. Zhang, Y. Lin, T. A. Tsiftsis, and Y.-D. Yao, "An amateur drone surveillance system based on the cognitive Internet of Things," IEEE Commun. Mag., vol. 56, no. 1, pp. 29–35, Jan. 2018.
[4] F. Fioranelli, M. Ritchie, H. Griffiths, and H. Borrion, "Classification of loaded/unloaded micro-drones using multistatic radar," Electron. Lett., vol. 51, no. 22, pp. 1813–1815, 2015. [Online]. Available: https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/el.2015.3038
[5] J. Drozdowicz et al., "35 GHz FMCW drone detection system," in Proc. 17th Int. Radar Symp. (IRS), 2016, pp. 1–4.
[6] A. Coluccia, G. Parisi, and A. Fascista, "Detection and classification of multirotor drones in radar sensor networks: A review," Sensors, vol. 20, no. 15, p. 4172, 2020. [Online]. Available: https://www.mdpi.com/1424-8220/20/15/4172
[7] Z. Zhang, Y. Cao, M. Ding, L. Zhuang, and W. Yao, "An intruder detection algorithm for vision based sense and avoid system," in Proc. Int. Conf. Unmanned Aircr. Syst. (ICUAS), 2016, pp. 550–556.
[8] S. R. Ganti and Y. Kim, "Implementation of detection and tracking mechanism for small UAS," in Proc. Int. Conf. Unmanned Aircr. Syst. (ICUAS), 2016, pp. 1254–1260.

[9] R. Stolkin, D. Rees, M. Talha, and I. Florescu, "Bayesian fusion of thermal and visible spectra camera data for mean shift tracking with rapid background adaptation," in Proc. IEEE SENSORS, 2012, pp. 1–4.
[10] A. Coluccia et al., "Drone-vs-bird detection challenge at IEEE AVSS2019," in Proc. 16th IEEE Int. Conf. Adv. Video Signal Based Surveillance (AVSS), 2019, pp. 1–7.
[11] M. Nijim and N. Mantrawadi, "Drone classification and identification system by phenome analysis using data mining techniques," in Proc. IEEE Symp. Technol. Homeland Security (HST), 2016, pp. 1–5.
[12] M. Benyamin and G. H. Goldman, Acoustic Detection and Tracking of a Class I UAS with a Small Tetrahedral Microphone Array, Army Res. Lab., Adelphi, MD, USA, Sep. 2014.
[13] J. Busset et al., "Detection and tracking of drones using advanced acoustic cameras," in Proc. Unmanned/Unattended Sens. Sens. Netw. XI Adv. Free-Space Opt. Commun. Techn. Appl., vol. 9647, Oct. 2015, Art. no. 96470F.
[14] P. Nguyen, M. Ravindranatha, A. Nguyen, R. Han, and T. Vu, "Investigating cost-effective RF-based detection of drones," in Proc. 2nd Workshop Micro Aerial Veh. Netw. Syst. Appl. Civilian Use, 2016, pp. 17–22. [Online]. Available: https://doi.org/10.1145/2935620.2935632
[15] P. Kosolyudhthasarn, V. Visoottiviseth, D. Fall, and S. Kashihara, "Drone detection and identification by using packet length signature," in Proc. 15th Int. Joint Conf. Comput. Sci. Softw. Eng. (JCSSE), 2018, pp. 1–6.
[16] I. Bisio, C. Garibotto, F. Lavagetto, A. Sciarrone, and S. Zappatore, "Blind detection: Advanced techniques for WiFi-based drone surveillance," IEEE Trans. Veh. Technol., vol. 68, no. 1, pp. 938–946, Jan. 2019.
[17] M. F. Al-Sa'd, A. Al-Ali, A. Mohamed, T. Khattab, and A. Erbad, "RF-based drone detection and identification using deep learning approaches: An initiative towards a large open source drone database," Future Gener. Comput. Syst., vol. 100, pp. 86–97, Nov. 2019.
[18] S. Al-Emadi and F. Al-Senaid, "Drone detection approach based on radio-frequency using convolutional neural network," in Proc. IEEE Int. Conf. Inform. IoT Enabling Technol. (ICIoT), 2020, pp. 29–34.
[19] M. M. Azari, H. Sallouha, A. Chiumento, S. Rajendran, E. Vinogradov, and S. Pollin, "Key technologies and system trade-offs for detection and localization of amateur drones," IEEE Commun. Mag., vol. 56, no. 1, pp. 51–57, Jan. 2018.
[20] P. Stoica, S. Basak, C. Molder, and B. Scheers, "Review of counter-UAV solutions based on the detection of remote control communication," in Proc. 13th Int. Conf. Commun. (COMM), 2020, pp. 233–238.
[21] S. Basak and B. Scheers, "Passive radio system for real-time drone detection and DoA estimation," in Proc. Int. Conf. Military Commun. Inf. Syst. (ICMCIS), May 2018, pp. 1–6.
[22] S. Basak, S. Rajendran, S. Pollin, and B. Scheers, "Drone classification from RF fingerprints using deep residual nets," in Proc. Int. Conf. Commun. Syst. Netw. (COMSNETS), 2021, pp. 548–555.
[23] H. Sun, A. Nallanathan, C.-X. Wang, and Y. Chen, "Wideband spectrum sensing for cognitive radio networks: A survey," IEEE Wireless Commun., vol. 20, no. 2, pp. 74–81, Apr. 2013.
[24] I. Kakalou, D. Papadopoulou, T. Xifilidis, K. E. Psannis, K. Siakavara, and Y. Ishibashi, "A survey on spectrum sensing algorithms for cognitive radio networks," in Proc. 7th Int. Conf. Modern Circuits Syst. Technol. (MOCAST), 2018, pp. 1–4.
[25] B. Scheers, D. Teguig, and V. Le Nir, "Wideband spectrum sensing technique based on goodness-of-fit testing," in Proc. Int. Conf. Military Commun. Inf. Syst. (ICMCIS), 2015, pp. 1–6.
[26] D. Teguig, B. Scheers, V. Le Nir, and F. Horlin, "Spectrum sensing method based on the likelihood ratio goodness of fit test under noise uncertainty," Int. J. Eng. Res. Technol., vol. 3, no. 9, pp. 488–494, 2014.
[27] T. J. O'Shea, T. Roy, and T. C. Clancy, "Over-the-air deep learning based radio signal classification," IEEE J. Sel. Topics Signal Process., vol. 12, no. 1, pp. 168–179, Feb. 2018.
[28] S. Rajendran, W. Meert, D. Giustiniano, V. Lenders, and S. Pollin, "Deep learning models for wireless signal classification with distributed low-cost spectrum sensors," IEEE Trans. Cogn. Commun. Netw., vol. 4, no. 3, pp. 433–445, Sep. 2018.
[29] S. Riyaz, K. Sankhe, S. Ioannidis, and K. Chowdhury, "Deep learning convolutional neural networks for radio identification," IEEE Commun. Mag., vol. 56, no. 9, pp. 146–152, Sep. 2018.
[30] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
[31] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 779–788.
[32] T. O'Shea, T. Roy, and T. C. Clancy, "Learning robust general radio signal detection using computer vision methods," in Proc. 51st Asilomar Conf. Signals Syst. Comput., 2017, pp. 829–832.
[33] E. Fonseca, J. F. Santos, F. Paisana, and L. A. DaSilva, "Radio access technology characterisation through object detection," Comput. Commun., vol. 168, pp. 12–19, Feb. 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0140366420320272
[34] S. Couturier and D. Rauschen, "Energy detection based on long-term estimation of Gaussian noise distribution," in Proc. 8th Karlsruhe Workshop Softw. Radios, 2014, pp. 89–95.
[35] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proc. ICCV, 2015, pp. 1026–1034.
[36] R. Huang, J. Pedoeem, and C. Chen, "YOLO-LITE: A real-time object detection algorithm optimized for non-GPU computers," in Proc. IEEE Int. Conf. Big Data (Big Data), 2018, pp. 2503–2510.
[37] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2015. [Online]. Available: arXiv:1412.6980.
[38] M. Abadi et al. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. [Online]. Available: https://www.tensorflow.org/
[39] European Commission. Horizon2020. The SafeShore Project. [Online]. Available: http://safeshore.eu

Sanjoy Basak received the M.Sc. degree in electrical engineering and information technology from the Karlsruhe Institute of Technology, Karlsruhe, Germany, in 2016. He is currently pursuing the joint Doctoral degree with the Royal Military Academy and the Department of Electrical Engineering, KU Leuven. He joined the Royal Military Academy, Belgium, as a Researcher in 2016. His research interests include deep learning algorithms for wireless signal detection and classification.

Sreeraj Rajendran received the master's degree in communication and signal processing from the Indian Institute of Technology Bombay, India, in 2013, and the Ph.D. degree from KU Leuven, Belgium, in 2019, where he is a Postdoctoral Researcher with the Networked Systems Group. In 2013, he was a Senior Design Engineer with the Baseband Team, Cadence (Tensilica). He was also an ASIC Verification Engineer with Wipro Technologies from 2007 to 2010. His main research interests include machine learning algorithms for wireless spectrum awareness and low power wireless sensor networks.

Sofie Pollin (Senior Member, IEEE) received the Ph.D. degree (Hons.) from KU Leuven, Leuven, Belgium, in 2006. From 2006 to 2008, she continued her research on wireless communication, energy-efficient networks, cross-layer design, coexistence, and cognitive radio with UC Berkeley. In November 2008, she returned to IMEC, Leuven, where she became a Principal Scientist with the Green Radio Team. Since 2012, she has been a tenure-track Assistant Professor with the Department of Electrical Engineering, KU Leuven. Her research centers around networked systems that require networks that are ever more dense, heterogeneous, battery-powered, and spectrum constrained. She is a fellow of the BAEF and Marie Curie.

Bart Scheers was born in Rumst, Belgium, in November 1966. He received the M.S. degree in engineering, with a specialization in communication, from the Royal Military Academy, Brussels, Belgium, in 1991, and the joint Ph.D. degree from the Université Catholique de Louvain, Ottignies-Louvain-la-Neuve, Belgium, and the Royal Military Academy, in 2001, where he presented his Ph.D. dissertation on the use of ground penetrating radars in the field of humanitarian demining. He was an Officer with the Territorial Signal Unit, Belgian Army. In 1994, he became an Assistant of Signal Processing with the Royal Military Academy, where he has been a Military Professor with the Communications Information Systems and Sensors Department since 2003, and is also the Director of the Research Unit on Radio Networks. His current domains of interest are mobile ad hoc networks (layers two and three), cognitive radio, and the Internet of Things.
