Combined RF-Based Drone Detection and Classification

Abstract—Despite several beneficial applications, drones are unfortunately also being used for illicit activities such as drug trafficking and firearm smuggling, or to impose threats on security-sensitive places like airports and nuclear power plants. The existing drone localization and neutralization technologies work on the assumption that the drone has already been detected and classified. Although we have observed a tremendous advancement in the sensor industry in this decade, no robust drone detection and classification method has been proposed in the literature yet. This paper focuses on radio frequency (RF) based drone detection and classification using the frequency signature of the transmitted signal. We have created a novel drone RF dataset using commercial drones and present a detailed comparison between a two-stage and a combined detection and classification framework. The detection and classification performance of both frameworks is presented for single-signal and simultaneous multi-signal scenarios. With detailed analysis, we show that the You Only Look Once (YOLO) framework provides better detection performance than Goodness-of-Fit (GoF) spectrum sensing for a simultaneous multi-signal scenario, and classification performance comparable to a Deep Residual Neural Network (DRNN) framework.

Index Terms—Signal detection and classification, sensor systems and applications, UAV.

Manuscript received February 1, 2021; revised May 5, 2021 and June 28, 2021; accepted July 13, 2021. Date of publication July 26, 2021; date of current version March 8, 2022. This work was supported by the Belgian Ministry of Defence. The associate editor coordinating the review of this article and approving it for publication was H. T. Dinh. (Corresponding author: Sanjoy Basak.)

Sanjoy Basak is with the Department CISS, Royal Military Academy, 1000 Bruxelles, Belgium, and also with the Department ESAT, KU Leuven, 3001 Leuven, Belgium (e-mail: [email protected]).

Bart Scheers is with the Department CISS, Royal Military Academy, 1000 Bruxelles, Belgium (e-mail: [email protected]).

Sreeraj Rajendran and Sofie Pollin are with the Department ESAT, KU Leuven, 3001 Leuven, Belgium (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TCCN.2021.3099114

2332-7731 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

I. INTRODUCTION

THERE has been a tremendous technological improvement in the drone industry. Drones are now being equipped with state-of-the-art (SoA) technologies and sensors such as GPS, LIDAR, radar and visual sensors. These technologies enable drones to support numerous applications like cinematography, farming, surveillance and recreational activities. Drones equipped with advanced technologies have great potential for damaged infrastructure inspection, urgent aid supply, and search and rescue operations in remote and unreachable places. Apart from these beneficial applications, drones are also being used for illegal activities which impose risks on public safety. The illegal activities include, but are not limited to, violation of public privacy, drug trafficking, firearm smuggling, bombing, and invading security-sensitive places like airports and nuclear power plants.

Several Counter Unmanned Aircraft Systems (C-UAS) have been proposed to disable an attack from a drone. These are mainly divided into two categories: hard and soft interception (kinetic or non-kinetic solutions). The kinetic solutions include intercepting a drone using (i) a trained bird of prey, (ii) a net gun [1], (iii) a laser beam, and (iv) a firearm. The non-kinetic solutions include (i) GPS spoofing [1] to deceive a drone's localization system and (ii) RF jamming. Irrespective of the solution chosen for a given environment, the presence of a drone must be detected and classified beforehand.

Detecting and classifying a drone automatically is a challenging task. Some popular technological approaches to detect and classify a drone include (i) radar detection, (ii) video detection, (iii) acoustic detection, and (iv) RF-based detection. A comprehensive literature review on the current SoA machine-learning-based drone detection and classification using these technologies is presented in [2]. Researchers have also proposed to integrate multiple technologies [3] for the detection and classification of UAVs.

Radar detection exploits the back-scattered RF signal to detect and classify a drone. Conventional radar systems fail to detect a mini-drone due to its small radar cross section (RCS). To overcome this problem, researchers utilized the micro-Doppler signature of a quadcopter or multi-rotor UAV to detect and classify it using a multistatic radar [4] or a Frequency Modulated Continuous Wave (FMCW) radar [5], [6]. A complete review of the detection and classification strengths of the current SoA FMCW radars is presented in [6].

Video/image detection includes both visual and thermal detection, and in [7]–[9] researchers proposed several drone detection methods using this technology. With this technique, drone detection is performed by analyzing the drone's color, shape and edge information [7]. The detection method is reliable; however, it requires a line of sight (LOS) between the drone and the camera, and the performance is highly dependent on daylight conditions and weather conditions like dust, rain, fog and cloud. Furthermore, the resemblance of a bird to a drone makes detection more challenging for a video detector. In [8], the authors utilized the motion and trajectory information of a drone to differentiate it from a bird. A brief overview of the frameworks capable of differentiating a drone from a bird is presented in [10].

The acoustic detection system utilizes the sound generated by flying drones to detect their presence using microphones.
IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, VOL. 8, NO. 1, MARCH 2022

In [11], the authors proposed a framework using Hidden Markov Models (HMM) to perform phoneme analysis and identify a flying drone from its emitted sounds. Furthermore, the detection and tracking of a drone using sensor arrays has also been proposed in the literature: a small tetrahedral array [12] and a microphone array consisting of 120 elements [13] have been used for drone detection and tracking. Acoustic detection generally works well in a quiet or less noisy environment; however, the performance deteriorates in noisy environments such as urban or industrial areas, or near seashores.

One of the most promising approaches to detect the presence of a drone is RF sensing. Commercial drones perform RF communication with their ground control station (GCS) for flight control and navigation, live video transmission and telemetry transfer. Autonomous drones also perform active RF communication to transfer live video and telemetry messages. An RF drone detection system can detect a drone by monitoring the communication frequency spectrum. A few RF-based drone detection techniques have been proposed in the literature [14]–[19]. In [14], the presence of a drone is detected by monitoring how frequently data packets are transmitted at 2.4 GHz. Since most drones use different, non-standardized protocols for communication with their controller, their data packet transmission rate differs from that of WiFi and other Access Points (AP) [14]. In [15], detection is performed by measuring the data packet length of a drone's communication link. These detection methods are inefficient, since the detector can easily be spoofed by an application communicating with an AP at the same packet transfer rate, or with the same packet length, as a drone. In [16], a WiFi-based drone surveillance method is proposed, where the identification is performed by a WiFi statistical fingerprinting technique. In [20], we observed that several commercial drones' GCSs use Frequency Hopping Spread Spectrum (FHSS) transmission as the radio control (RC) signal, which should also be accounted for in the identification method.

The detection and identification of a drone using frequency signatures is presented in [17], [18], using Deep Neural Network (DNN) based classifiers. In [17], the authors developed a dataset using three commercial drones and used a simple feedforward DNN to detect and identify them. In [18], the authors presented detection, identification and classification on the same dataset using a Convolutional Neural Network (CNN). These studies were performed on a limited dataset, and the impact of noise on the detection performance was not studied. Moreover, the detection performance in the presence of multiple signals or interference was not investigated.

We presented RF-based drone detection using GoF spectrum sensing and DoA estimation using the MUSIC algorithm in [21]. Drone signal detection using wideband CFAR-based energy detection, and the corresponding feature extraction performance, is presented in [20]. In [22], we presented drone signal classification using a DRNN framework; the classification was performed assuming a signal had already been detected by a spectrum sensing algorithm. A complete solution for drone detection and classification based on RF fingerprints was not presented in our previous works, which we address in this paper. We propose two complete solutions for drone signal detection and classification, and provide an in-depth performance comparison. The detection and classification is performed in two different ways: (i) a two-stage detection and classification process, where signal detection is initially performed using an efficient spectrum sensing method that detects all of the signals present in the spectrum, and the detected signals are then passed to a SoA classifier to provide a robust classification; and (ii) combined detection and classification, where the signals are detected and classified simultaneously. For both proposed methods, we perform detection and classification with the received signal from a single receiver and by using frequency-domain fingerprints. The advantages are: (i) using the received signal from a single receiver eliminates the need to calibrate multiple receivers, making both methods easily deployable with a low-cost SDR and a computational unit; and (ii) the frequency-domain detection and classification provide the necessary information for a possible RF jammer. Both methods can perform fast detection and classification, even in the presence of signals overlapping in both time and frequency. They are more generalized and robust, as they determine both the position and the type of a signal.

The main contributions of this paper are the following.

1) A novel and realistic multi-signal dataset is created using nine commercial drones and non-drone signals (i.e., WiFi communication signals). The dataset will be made public for future research.
2) The YOLO-lite architecture is recreated from scratch and modified to perform combined drone signal detection and classification. The two-stage detection and classification is performed using GoF spectrum sensing and a DRNN classifier.
3) Simultaneous multi-signal detection, spectrum localization and classification in the ISM band is presented in this paper. We are the first to propose a framework for simultaneous multi-signal drone detection and classification.
4) The detection and classification performance of both frameworks is evaluated on our dataset. Through detailed comparisons, we show that the YOLO-lite framework provides better detection performance compared to GoF sensing, and classification performance comparable to the DRNN classifier.

The rest of the paper is organized as follows. A mathematical model of the received signal in an ISM band is presented in Section II. Section III provides an overview of SoA techniques for two-stage and combined signal detection and classification. The technical details of the two-stage and combined detection and classification are presented in Section IV. The dataset development and the experiment strategies are explained in Section V. The performance analysis is presented in Section VI, and concluding remarks are provided in Section VII.

II. PROBLEM STATEMENT

The ISM bands are generally populated by several homogeneous and heterogeneous RF transmissions. The transmitters generally use spread spectrum technology to perform the
communications. FHSS transmissions are generally blind, whereas Direct Sequence Spread Spectrum (DSSS) transmissions are often cognitive in nature. Most commercial drones use a DSSS signal for video transmission. Unlike FHSS transmitters, they perform sensing to find a free (or relatively free) channel before starting to transfer the video signal. One example of such heterogeneous transmission at 2.4 GHz is shown in Fig. 1. As can be seen from the figure, four transmissions occur at the same time: three transmitters use DSSS technology and one transmitter uses FHSS technology.

Fig. 1. Presence of multiple signals at the 2.4 GHz ISM band. (Spectrogram showing WiFi, Wltoys, Parrot and Tello transmissions over 2390–2480 MHz.)

The received signal from a drone can be expressed as:

r(t) = \sum_{k=1}^{K} y_k(t) * h_k(t) + n(t),    (1)

where y_k(t) is the complex baseband transmitted signal, h_k(t) is the time-varying impulse response, k denotes the index of the transmitted signal, K is the total number of transmitters active in the ISM band, and n(t) is Additive White Gaussian Noise (AWGN). With the help of the discrete Fourier transform (DFT), the complex time-domain received signal can be converted into M_t consecutive segments, each of length N_f (the FFT size). The magnitude of the DFT matrix gives the spectrogram matrix. This study aims to detect and classify drone and WiFi communication signals from the spectrogram matrix. The spectrogram representation provides more information than a Power Spectral Density (PSD) or IQ representation of the signal: it enables the classifier to determine useful RF signal features like frequency, bandwidth, dwell time and hop rate. Since commercial drones use pseudo-random number generators to generate the communication signals, the hopping pattern or the signal position will vary. The objective of our work is not to learn the hopping pattern, but rather to learn how the data/signal is distributed in the spectrum in order to detect and classify it. Deep Learning (DL) algorithms learn the signal distribution, which depends on factors like (i) frequency, (ii) bandwidth, (iii) modulation, (iv) filtering parameters, and (v) device nonlinearities. We aim to utilize a DL algorithm to learn these factors from the spectrogram matrix.

III. BACKGROUND

A. Signal Detection

The conventional spectrum sensing methods can be classified into parametric and non-parametric methods. The widely used sensing methods are energy detection, eigenvalue detection, matched filtering and cyclostationary feature detection [23]. Energy detection and eigenvalue detection are non-parametric methods. Energy-based detection has been widely used for decades, mainly due to its simplicity: a signal is detected if the measured energy is greater than the threshold corresponding to a specified false alarm rate. The eigenvalue detection method estimates the ratio of the maximum and minimum eigenvalues and compares it with a threshold to determine whether any signal is present. Cyclostationary and matched filtering detection are parametric methods; they require perfect knowledge of the transmitted signal and can work at lower signal-to-noise ratio (SNR) than energy-based detection [24]. The cyclostationary spectrum sensing method exploits the periodicity introduced in the transmitted signal. The matched filtering method detects a signal by correlating a known template (extracted from the transmit signal) with the received signal. Both methods are targeted towards specific (or known) signals and incur a high computational cost.

For the drone signal detection within the two-stage detection and classification, we choose a wideband GoF-based blind spectrum sensing algorithm [25]. GoF sensing can provide better performance than conventional energy detection using fewer samples of the received signal, at low SNR, and in the presence of non-Gaussian noise [26]. The wideband GoF sensing uses the DFT to divide the frequency band into small frequency bins and performs narrowband GoF sensing on each bin. In this paper, we have used the Anderson-Darling test statistic for the GoF sensing [25].

B. Signal Classification

DL methods have shown SoA performance in the classification of wireless signals and have outperformed conventional classification methods. Some remarkable works have been published [27], [28] in the past few years on the classification of modulated signals and on device fingerprinting using merely the raw received signals. In recent years, CNN frameworks have been widely investigated for wireless signal recognition and classification problems [27], [29]. Among the different variants of CNNs, residual network-based CNNs [30] have shown great performance and outperformed other classifiers of equivalent network depth. In this paper, we adapt the DRNN proposed in [27] for the drone and WiFi signal classification.

C. Combined Detection and Classification

DNN-based visual object detection and classification techniques provide great tools, such as YOLO [31], for combined RF signal detection, frequency localization and classification. Signal detection/classification from a spectrogram image is analogous to visual object detection and classification. A spectrogram image provides the time and frequency information of a spectrum instance, which can be utilized
TABLE II. YOLO network architecture.

in time and frequency domain are used as the input. Since recognizing such time-frequency domain spectral events is relatively simpler than visual object recognition [32], a smaller network may be sufficient for the YOLO detection and classification task. In our experiments, we adapted the YOLO-lite [36] architecture, which is a smaller and faster network that can be deployed on a non-GPU computer. Our adaptation of the YOLO-lite architecture is shown in Table II. We used leaky-ReLU activation after each convolution operation (C1–C6) and linear activation on the C7 layer. The max-pooling operation is performed after convolutions C1–C5. Finally, a fully connected layer is employed, and sigmoid activation is performed.

A spectrogram dataset with a dimension of 256x256 is used as the input. The network produces an output grid containing the detection probability, bounding box coordinates and class probabilities, as follows:

O/P_shape = S × S × (B × [C, x, y, w, h] + P)    (3)

Here, S denotes the size of the grid. Each grid cell contains B bounding boxes, each with a detection confidence score C, 2D coordinates x, y, width w and height h of the object, together with the class probabilities P. We used a grid size of 16, 2 bounding boxes and 10 different classes (Table III) for our tests. In the original YOLO-lite architecture an output grid size of 8 was used; however, we found during our tests that a larger grid size is required to annotate the DSSS spectrograms (e.g., Tello, Parrot). With the above specified parameters, the output dimension becomes 16 x 16 x 20. We used the Adam optimizer [37] to optimize the training loss, which involves minimizing the sum of mean squared error losses between the ground truth and the network prediction. The complete training loss function provided in [31] is used for the training optimization.

V. EXPERIMENTS

A. Experimental Setup

The drone and WiFi signals were recorded in an anechoic chamber. For this experiment, we only considered transmitters operating at 2.4 GHz. A Universal Software Radio Peripheral (USRP) X310 was used with an omnidirectional antenna. A receive sampling rate of 100 MSps was used to receive instantaneously from the complete 2.4 GHz ISM band. Nine commercial radio controllers with drones and two WiFi routers (Table III) were used to develop the dataset. The devices were placed seven meters from the receiver.

TABLE III. Drones, radio controllers and WiFi sources used for the dataset development.

Since a UAV controller generally uses a pseudorandom generator to generate the FHSS sequences of the RC signal, we included all possible hop sequences from each controller in our database. The controllers were turned off and on several times during data collection to inspect whether the hop positions change, and to include those changes in our database as well.

B. Dataset Development

To test the classification performance at lower SNRs, we introduced AWGN to the signal in the simulation environment. Generally, the SNR is calculated in the time domain by measuring the transmission power of the signal. Since the signal bandwidth differs between drones, it becomes difficult to calculate the SNR in the time domain. Therefore, we calculated the signal SNR in the frequency domain, as shown in Fig. 5(a). Since the bandwidth and transmission power differ between drones, the calculated SNR was also slightly different for different noise values, as presented in Fig. 5(b). To keep the performance analysis of the different frameworks on the dataset simple, we considered the average SNR for the introduced AWGN values, as shown in Fig. 5(c).

In [22], we evaluated the classification performance in Rician and Rayleigh fading simulation environments, where the classifier was trained with the AWGN dataset. We did not observe any significant deviation in the classification performance due to the channel variation. Therefore, in this paper we only evaluated the classification performance under AWGN conditions.

C. Implementation Details

The GoF sensing was implemented in MATLAB. The DRNN framework was implemented using Tensorflow-Keras and the YOLO-lite framework was implemented using TFlearn
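The grid decoding implied by Eq. (3) can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors' implementation: the channel layout (B blocks of [C, x, y, w, h] followed by P class scores shared per cell) and the 0.5 confidence threshold are assumptions, and `decode_yolo_output` is a hypothetical helper name.

```python
import numpy as np

S, B, P = 16, 2, 10  # grid size, boxes per cell, number of classes (Table III)

def decode_yolo_output(out, conf_thresh=0.5, img=256):
    """Turn the S x S x (B*5 + P) grid of Eq. (3) into detections.

    Assumed layout per cell: B blocks of [C, x, y, w, h] followed by
    P class scores shared by the cell's boxes. (x, y) are cell-relative
    box centers and (w, h) are image-relative sizes, as in YOLO.
    """
    assert out.shape == (S, S, B * 5 + P)
    dets = []
    for r in range(S):
        for c in range(S):
            cls = int(np.argmax(out[r, c, B * 5:]))  # cell's class decision
            for b in range(B):
                conf, bx, by, bw, bh = out[r, c, b * 5:(b + 1) * 5]
                if conf > conf_thresh:
                    dets.append((cls, float(conf),
                                 (c + bx) * img / S,   # center frequency pixel
                                 (r + by) * img / S,   # center time pixel
                                 bw * img, bh * img))  # bandwidth, duration
    return dets
```

With the paper's settings (S = 16, B = 2, P = 10), each cell carries B × 5 + P = 20 channels, reproducing the stated 16 x 16 x 20 output dimension.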
Fig. 11. Signal detection using GoF spectrum sensing in the presence of multiple signals.
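The narrowband Anderson-Darling test at the heart of the wideband GoF sensing illustrated in Fig. 11 can be sketched as follows. This is a simplified illustration rather than the MATLAB implementation used in the paper: it assumes the per-bin noise power is known, models the noise-only bin energies of complex AWGN as exponentially distributed, and uses an illustrative fixed threshold instead of one calibrated to a target false alarm rate.

```python
import numpy as np

def ad_statistic(bin_energies, noise_power):
    """Anderson-Darling statistic of one bin's energy samples against the
    noise-only model: for complex AWGN, the DFT bin energy is exponentially
    distributed with mean equal to the per-bin noise power."""
    x = np.sort(np.asarray(bin_energies, dtype=float))
    n = len(x)
    # CDF under the noise-only hypothesis, clipped to keep the logs finite
    z = np.clip(1.0 - np.exp(-x / noise_power), 1e-12, 1.0 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1.0 - z[::-1])))

def wideband_gof(iq, nfft, noise_power, threshold=2.5):
    """Wideband GoF sensing: segment the IQ stream, take an N_f-point DFT
    per segment, and run the narrowband AD test on each bin's energies.
    Returns a boolean occupancy decision per frequency bin. With an
    unnormalized FFT, the per-bin noise power is nfft times the
    per-sample noise power."""
    segs = iq[: len(iq) // nfft * nfft].reshape(-1, nfft)
    energies = np.abs(np.fft.fft(segs, axis=1)) ** 2  # M_t x N_f matrix
    return np.array([ad_statistic(energies[:, f], noise_power) > threshold
                     for f in range(nfft)])
```

A bin whose statistic exceeds the threshold rejects the noise-only hypothesis, i.e., is declared occupied; the per-bin decisions together give the frequency occupancy that the two-stage pipeline passes on to the classifier.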
C. Comparison Summary

The comparisons are summarized here.

TABLE IV. Complexity comparison.

• Signal Detection: We obtained better signal detection performance with YOLO than with GoF spectrum sensing on our dataset. In the lower SNR regions, GoF sensing showed a high false alarm rate compared to YOLO detection. Since YOLO is a supervised detection framework, its performance may deviate when detecting unknown signals. This issue can be resolved with transfer learning using a small labeled dataset of the new signal. By contrast, GoF sensing is a blind spectrum sensing method: it is able to detect any signal present in the spectrum.

• Signal Classification: The DRNN framework provided better classification performance than the YOLO framework. This is expected from a deep residual network, since it utilizes skip connections in its architecture and the network is deeper than the YOLO framework.

• Signal Localization and Feature Extraction: Signal localization and feature extraction are the strongest features of the YOLO framework. The localization feature of YOLO enables us to detect multiple signals simultaneously and extract useful features from the received signal: it can provide the center frequency, bandwidth, hop rate and dwell time of the detected signal. This information is required for an RF jammer to perform soft neutralization of a drone. With the two-stage detection and classification framework, it is not possible to extract all of these features. The DRNN framework does not provide the spectral position of the signal under classification. GoF sensing provides the frequency and bandwidth of the signal; however, in the presence of multiple signals, it is difficult to associate the signal features with the classification label.

• Complexity Analysis: A complexity analysis was performed to give an overview of the computational complexity and the inference time required for each framework. The total number of network parameters (trainable + non-trainable), the mean inference time over the test samples, and the mean prediction time per sample are presented in Table IV. The inference time calculation was performed with the dataset used in the single-signal detection scenario, where the test set comprised 2.1k samples. We used a batch size of 100 and performed the test on an Nvidia GeForce RTX 2080 Ti GPU. As can be seen from Table IV, YOLO-lite is approximately 3.4 times faster than the DRNN framework. GoF spectrum sensing is computationally simpler than the DRNN and YOLO-lite frameworks and does not require a high-end computational unit. During the SafeShore [39] project, we implemented the GoF sensing in C++ on an Odroid-XU4 platform and performed real-time tests.

• Limitation: One limitation of both proposed methods is the classification of completely unknown signals. Since the classification is performed in a supervised manner, the classifier may not be able to classify, or provide a label for, signals transmitted by newer drones. There are two possible outcomes in such a case: (i) the classifier will label it as an existing drone signal if the transmitted signal has a similar frequency fingerprint, or (ii) the classifier will be confused and provide a very low classification score for all classes. Similarly, some specific models of UAV controllers may use a completely different hopping sequence compared to another controller of the same model. The YOLO detection and classification performance for such cases has not yet been tested. We are going to investigate and address these issues in our future work.

VII. CONCLUSION

In this paper, we performed drone signal detection, spectrum localization and classification using two-stage and combined detection and classification methods. Under the two-stage technique, we used GoF sensing for the detection and the DRNN framework for the classification. The YOLO-lite framework was recreated from scratch to perform combined drone RF signal detection, spectrum localization and classification. A detailed performance comparison between the two techniques is presented using a novel drone dataset that was prepared for this study. We obtained good detection and classification performance with both techniques. Since the classification is performed in a supervised manner, the performance may deviate in the presence of unknown or newer drone signals, as discussed in detail in the limitation discussion. In future work, we are going to investigate unsupervised scenarios, since we are interested in developing a robust framework that can detect and classify all drone signals irrespective of the dataset it is trained with.

REFERENCES

[1] D. Sathyamoorthy, "A review of security threats of unmanned aerial vehicles and mitigation steps," J. Defence Security, vol. 6, no. 1, pp. 81–97, 2015.
[2] B. Taha and A. Shoufan, "Machine learning-based drone detection and classification: State-of-the-art in research," IEEE Access, vol. 7, pp. 138669–138682, 2019.
[3] G. Ding, Q. Wu, L. Zhang, Y. Lin, T. A. Tsiftsis, and Y.-D. Yao, "An amateur drone surveillance system based on the cognitive Internet of Things," IEEE Commun. Mag., vol. 56, no. 1, pp. 29–35, Jan. 2018.
[4] F. Fioranelli, M. Ritchie, H. Griffiths, and H. Borrion, "Classification of loaded/unloaded micro-drones using multistatic radar," Electron. Lett., vol. 51, no. 22, pp. 1813–1815, 2015. [Online]. Available: https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/el.2015.3038
[5] J. Drozdowicz et al., "35 GHz FMCW drone detection system," in Proc. 17th Int. Radar Symp. (IRS), 2016, pp. 1–4.
[6] A. Coluccia, G. Parisi, and A. Fascista, "Detection and classification of multirotor drones in radar sensor networks: A review," Sensors, vol. 20, no. 15, p. 4172, 2020. [Online]. Available: https://www.mdpi.com/1424-8220/20/15/4172
[7] Z. Zhang, Y. Cao, M. Ding, L. Zhuang, and W. Yao, "An intruder detection algorithm for vision based sense and avoid system," in Proc. Int. Conf. Unmanned Aircr. Syst. (ICUAS), 2016, pp. 550–556.
[8] S. R. Ganti and Y. Kim, "Implementation of detection and tracking mechanism for small UAS," in Proc. Int. Conf. Unmanned Aircr. Syst. (ICUAS), 2016, pp. 1254–1260.
120 IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, VOL. 8, NO. 1, MARCH 2022
Sanjoy Basak received the M.Sc. degree in electrical engineering and information technology from the Karlsruhe Institute of Technology, Karlsruhe, Germany, in 2016. He is currently pursuing the joint Doctoral degree with the Royal Military Academy and the Department of Electrical Engineering, KU Leuven. He joined the Royal Military Academy, Belgium, as a Researcher in 2016. His research interests include deep learning algorithms for wireless signal detection and classification.

Sreeraj Rajendran received the master's degree in communication and signal processing from the Indian Institute of Technology Bombay, India, in 2013, and the Ph.D. degree from KU Leuven, Belgium, in 2019, where he is a Postdoctoral Researcher with the Networked Systems Group. In 2013, he was a Senior Design Engineer with the Baseband Team, Cadence (Tensilica). He was also an ASIC Verification Engineer with Wipro Technologies from 2007 to 2010. His main research interests include machine learning algorithms for wireless spectrum awareness and low-power wireless sensor networks.

Sofie Pollin (Senior Member, IEEE) received the Ph.D. degree (Hons.) from KU Leuven, Leuven, Belgium, in 2006. From 2006 to 2008, she continued her research on wireless communication, energy-efficient networks, cross-layer design, coexistence, and cognitive radio with UC Berkeley. In November 2008, she returned to IMEC, Leuven, where she became a Principal Scientist with the Green Radio Team. Since 2012, she has been a tenure-track Assistant Professor with the Department of Electrical Engineering, KU Leuven. Her research centers on networked systems that require networks that are ever more dense, heterogeneous, battery-powered, and spectrum-constrained. She is a BAEF and Marie Curie fellow.

Bart Scheers was born in Rumst, Belgium, in November 1966. He received the M.S. degree in engineering, with a specialization in communication, from the Royal Military Academy, Brussels, Belgium, in 1991, and the joint Ph.D. degree from the Université Catholique de Louvain, Ottignies-Louvain-la-Neuve, Belgium, and the Royal Military Academy, in 2001, where he presented his Ph.D. dissertation on the use of ground-penetrating radars in the field of humanitarian demining. He was an Officer with the Territorial Signal Unit, Belgian Army. In 1994, he became an Assistant in Signal Processing with the Royal Military Academy, where he has been a Military Professor with the Communications, Information Systems and Sensors Department since 2003, and is also the Director of the Research Unit on Radio Networks. His current domains of interest are mobile ad hoc networks (layers two and three), cognitive radio, and the Internet of Things.