Automatic LPI Radar Waveform Recognition Using CNN: IEEE Access January 2018
ABSTRACT Detecting and classifying the modulation scheme of intercepted noisy LPI (low probability of intercept) radar signals in real time is a necessary survival technique in electronic warfare systems. Therefore, LPI radar waveform recognition techniques (LWRT) have gained increasing attention recently. In this paper, we propose a convolutional neural network (CNN) based LWRT, where the input and hyper-parameters of the CNN, such as the input size, number of filters, filter size, and number of neurons, are designed based on various signal conditions to maximize the classification performance. In addition, we propose a sample averaging technique (SAT) to efficiently reduce the large computational cost incurred when the intercept receiver must process a large number of signal samples to improve the detection sensitivity. We demonstrate the performance of the proposed LWRT with numerous Monte Carlo simulations based on the simulation conditions used in recent LWRTs introduced in the literature, and show that the proposed LWRT offers significant improvements over the recent LWRTs, such as robustness to noise and higher recognition accuracy.
INDEX TERMS convolutional neural network, low probability of intercept, radar waveform recognition.
VOLUME 4, 2016 1
2169-3536 (c) 2017 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/ACCESS.2017.2788942, IEEE Access
S.-H. Kong and M. Kim et al.: Automatic LPI Radar Waveform Recognition using CNN
FIGURE 1: Proposed LPI radar waveform recognition system (Preprocessing-I and Preprocessing-II blocks).

TABLE 1: Frequency and phase offset φ of the LPI radar waveforms.

Modulation  Frequency  Phase offset φ
Frank       constant   (2π/M)(i − 1)(j − 1)
P1          constant   −(π/M)[M − (2j − 1)][(j − 1)M + (i − 1)]
P2          constant   −(π/(2M))[2i − 1 − M][2j − 1 − M]
T1          constant   mod{(2π/Nps)⌊(Ng(kTs) − jτpw)(jNps/τpw)⌋, 2π}
T2          constant   mod{(2π/Nps)⌊(Ng(kTs) − jτpw)((2j − Ng + 1)/τpw)(Nps/2)⌋, 2π}
T3          constant   mod{(2π/Nps)⌊Nps B(kTs)²/(2τpw)⌋, 2π}
T4          constant   mod{(2π/Nps)⌊Nps B(kTs)²/(2τpw) − Nps B(kTs)/2⌋, 2π}

Here i and j are subcode (and segment) indices, M is the number of frequency steps, Nps is the number of phase states, Ng is the number of segments, B is the modulation bandwidth, and τpw is the pulse width.

… frequency (IF) fI and then sampled at fs = 1/Ts to yield … The signal samples are collected for a signal pulse interval (τpw), and the coarse estimate of the carrier frequency is obtained. Note that the preprocessing-I block is an optional function used only when fs/(fI + Bs/2) is by multiple times …
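As an illustration of the Table 1 entries, the Frank phase offsets can be generated directly from the formula above (a minimal sketch; the function name and the example values of M are illustrative, not from the paper):

```python
import numpy as np

def frank_phase(M):
    """Frank code phase offsets phi(i, j) = (2*pi/M)*(i-1)*(j-1),
    for i, j = 1..M, flattened row by row into an M*M-element sequence."""
    i, j = np.meshgrid(np.arange(1, M + 1), np.arange(1, M + 1), indexing="ij")
    return ((2 * np.pi / M) * (i - 1) * (j - 1)).ravel()

# A Frank-coded constant-frequency baseband signal is then exp(1j * phase)
# held for one subcode duration per phase value.
```

For M = 2 this yields the phase sequence {0, 0, 0, π}, i.e., a 4-element polyphase code.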
FIGURE: Average and standard deviation of the SNR loss [dB] versus Nsc (Nsc = 2, …, 6).
other half is from a −1 chip. Therefore, in [15], the sample averaging is performed for each possible sample offset, and includes another process to find the sample offset that leads to the highest post-correlation SNR. However, exploiting the SAT to reduce the LPI radar signal sample size is different from GPS. For example, while a code chip has a very short time interval (almost a microsecond) and the unknown frequency is small in GPS, a subcode that may cause destructive averaging has a relatively very long interval, and the unknown frequency can be larger than in GPS, in an LPI radar waveform with pulse compression. This is the reason for limiting Na to maintain at least Nsc averaged samples per carrier cycle at the highest frequency allowed by the intercept receiver (i.e., fI + Bs/2).

FIGURE 4: CWD-TFIs of the twelve LPI radar waveforms: (a) LFM, (b) Costas, (c) BPSK, (d) Frank, (e) P1, (f) P2, (g) P3, (h) P4, (i) T1, (j) T2, (k) T3, (l) T4.
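The SAT itself is simple block averaging subject to the Nsc constraint stated above; a minimal sketch (function names and the numerical example are illustrative, not from the paper):

```python
import numpy as np

def sample_average(y, Na):
    """Sample averaging technique (SAT): replace each group of Na
    consecutive samples by their average, reducing the sample count by Na."""
    y = np.asarray(y)
    n = (len(y) // Na) * Na              # drop any incomplete trailing group
    return y[:n].reshape(-1, Na).mean(axis=1)

def max_Na(fs, fI, Bs, Nsc):
    """Largest Na that still leaves at least Nsc averaged samples per
    carrier cycle at the highest allowed frequency fI + Bs/2."""
    return int(fs // (Nsc * (fI + Bs / 2)))
```

For example, `sample_average(np.arange(8.0), 2)` averages pairs of samples into a 4-sample output, and `max_Na` caps Na so that the averaged signal still resolves the highest-frequency carrier.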
B. TFA TECHNIQUE FOR CWD-TFI
This subsection provides the essential description of the CWD used to produce the TFI of the intercepted signal in the TFA block and discusses the unique patterns of the twelve LPI waveforms based on their CWD-TFIs.

CWD_y(ℓ, ω) = 2 Σ_{τ=−∞..∞} W_P(τ) e^{−α2ωτ} [ Σ_{µ=−∞..∞} W_Q(µ − ℓ) √(σ/(4πτ²)) e^{−σ(µ−ℓ)²/(4τ²)} y(µ + τ)y(µ − τ) ],   (5)

where α = √−1 is the imaginary unit, ℓ and ω are the time and angular frequency index variables, respectively, τ and µ are discrete variables, ξ and ω are continuous variables, and W_P(τ) and W_Q(µ) are windows of lengths P and Q. Note that the exponential kernel function ϕ(ξ, τ) = e^{−ξ²τ²/σ} has a scaling factor σ that is an effective parameter for low-pass filtering in the ambiguity function (AF) of the Cohen's class. Since the CWD is generated from the AF using the kernel function and the 2-dimensional (2D) Fourier transform, the CWD of the intercepted signal y[k] comprised of multiple frequency components has high power spectrum intensity near the center frequency, and some power spectrum components away from the center frequency, due to the auto-terms and the (non-zero frequency) cross-terms in the AF, respectively [18]. The scaling factor σ in (5) can be used to lessen the cross-terms in the CWD, while the frequency resolution in the CWD is degraded. In the proposed technique, we use …
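The discrete CWD in (5) can be sketched numerically as follows: a direct-sum implementation under simplifying assumptions (the rectangular windows are absorbed into the finite summation limits, the τ = 0 term is taken as |y(ℓ)|², and the ±τ terms are combined by taking the real part; function and parameter names are illustrative):

```python
import numpy as np

def cwd(y, sigma=1.0, n_freq=64, max_lag=16):
    """Direct-sum sketch of the discrete Choi-Williams distribution (5).
    Returns an (n_freq, len(y)) real array; rows sample omega in [0, pi)."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    omega = np.pi * np.arange(n_freq) / n_freq
    tfd = np.zeros((n_freq, N))
    for ell in range(N):
        # tau = 0 term: the kernel collapses to a delta at mu = ell
        acc = np.full(n_freq, np.abs(y[ell]) ** 2, dtype=complex)
        for tau in range(1, max_lag):
            mu = np.arange(tau, N - tau)      # indices where mu +/- tau are valid
            if mu.size == 0:
                break
            # Gaussian time-smoothing window: Fourier transform of the
            # kernel exp(-xi^2 tau^2 / sigma) along xi
            w = np.sqrt(sigma / (4 * np.pi * tau**2)) \
                * np.exp(-sigma * (mu - ell) ** 2 / (4.0 * tau**2))
            r = np.sum(w * y[mu + tau] * np.conj(y[mu - tau]))  # conj is a no-op for real y
            acc += 2 * np.exp(-2j * omega * tau) * r            # +tau and -tau combined
        tfd[:, ell] = acc.real
    return tfd
```

For a single complex tone, the resulting time-frequency image concentrates its energy in the frequency row nearest the tone frequency, which is the auto-term behavior discussed above.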
sidelobe level in the autocorrelation result is not larger than 1/ρ of the main lobe, where the length of the sequence is ρ. This characteristic provides an advantage in coping with the target masking problem [13], but BPSK modulated signals can be easily detected due to the simple modulation, so BPSK is not used as an LPI radar waveform modulation [2] in practice. However, its CWD-TFI in Fig. 4(c) has a noticeable similarity to the CWD-TFI of the T1 code shown in Fig. 4(i). Therefore, we include the BPSK modulation for a performance comparison, as it is included in the recent studies.

FIGURE 5: Image cropping and resizing: Time-Frequency Analysis → Cut off the zero-padding part → Resize & Normalize → Classifier.

4) Polyphase
FIGURE 6: Structure of the proposed CNN: Conv-1 (size 7×7, stride 1, zero-pad 3) → Pooling-1 (size 2×2, stride 2, zero-pad 0) → Conv-2 (size 3×3, stride 1, zero-pad 1) → Pooling-2 (size 2×2, stride 2, zero-pad 0) → Flatten → FC-1 → FC-2 → Softmax → output layer (modulation type).
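The feature-map sizes implied by these layer hyper-parameters can be checked with a small sketch (shape arithmetic only, not training code; `conv2d_out` is an illustrative helper, and the 128×128 input with 30/60 filter counts are the values the design settles on):

```python
def conv2d_out(size, kernel, stride, pad):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Feature-map sizes through the pipeline for a 128x128 input,
# with 30 filters in Conv-1 and 60 filters in Conv-2.
s = 128
s = conv2d_out(s, kernel=7, stride=1, pad=3)   # Conv-1:    128 -> 128
s = conv2d_out(s, kernel=2, stride=2, pad=0)   # Pooling-1: 128 -> 64
s = conv2d_out(s, kernel=3, stride=1, pad=1)   # Conv-2:    64 -> 64
s = conv2d_out(s, kernel=2, stride=2, pad=0)   # Pooling-2: 64 -> 32
flat = s * s * 60                              # flattened input to FC-1
```

With these settings the flattened feature vector entering FC-1 has 32 × 32 × 60 = 61,440 elements, which motivates the relatively large initial FC neuron count discussed below.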
Table 1 [2]. In this paper, we consider two states (i.e., 0 and π) for the phase offset φ[k], which is the most popular and applicable case, because of the simplicity in generating the waveforms [2], [19]. Among the polytime modulations, T1 and T3 codes have φ[k] = 0 at the beginning and φ[k] = 0 or π in the middle and at the end of the code. Due to this fact, T1 and T3 codes have a constant frequency at the beginning and a symmetrical power spectrum distribution across the center frequency in the middle and at the end of the code. As a result, the CWD-TFIs of T1 and T3 codes have '<' shapes, as shown in Fig. 4(i) and 4(k). On the other hand, T2 and T4 codes have φ[k] = 0 at the center of the code and φ[k] = 0 or π at both ends of the code, which results in 'X' shapes in the CWD-TFIs, as shown in Fig. 4(j) and 4(l). Note that the CWD-TFIs for BPSK and T1, shown in Fig. 4(c) and Fig. 4(i), respectively, are similar and can be difficult to distinguish when the power spectrum in the tail (i.e., samples with indices larger than 500) is buried under strong noise (i.e., low SNR cases). Similarly, T2 and T4 codes, shown in Fig. 4(j) and 4(l), respectively, can be difficult to distinguish when the power spectrum at both sides (i.e., samples with indices less than 300 and those larger than 700) is buried under strong noise (i.e., low SNR cases).

C. PREPROCESSING-II: IMAGE RESIZING
Studies in [7], [8] apply noise filtering to the input image to lessen the effect of the noise and, thus, to improve the SNR, but this may result in a loss of signal information contained in the details of the image. Preserving the details in the CWD-TFI (e.g., the staircase patterns in the Frank and P1 codes, and the weaker intensities at both ends of the P4, at the tail of BPSK and T1, and at the sides of T2 and T4) is critical to the classification performance. Therefore, we use the intact CWD-TFI for any level of the noisy signal in the proposed LWRT.

In the proposed LWRT, we collect fewer than 2048 signal samples and zero-pad to make an input size of 2048 samples in order to use the FFT algorithm in the CWD. Then, we crop the CWD-TFI to remove the pixels generated by the zero-padding. The cropped CWD-TFI is then resized to further reduce the input size appropriately for the CNN, which lessens the computational cost of the CNN while guaranteeing good classification performance. However, we should not reduce the cropped CWD-TFI to too small a size, because the details of the object in the CWD-TFI can be lost, which may result in a significant performance degradation. Therefore, we need a balance between the computational cost and the classification performance with regard to the input size of the CNN. In the proposed LWRT, we use P = Q = 2048 for the rectangular windows WP(τ) and WQ(µ) so that the produced CWD-TFI has 2048×2048 pixels, and we resize the TFI to 128×128 pixels using the nearest-neighbor interpolation (NNI) technique [20]. Note that the cubic interpolation (CI) technique [21] may be used in the image resizing; however, we do not use the CI technique, since each resulting pixel may be corrupted by multiple noisy pixels in the original image. The image cropping and resizing processes are illustrated in Fig. 5.

IV. DESIGN OF THE PROPOSED CNN
In general, the CNN shows superb performance in image classification problems such as handwriting recognition and various object recognition tasks including human faces, license plates, hand gestures, logos, and texts [22]. This is the reason we employ the CNN in the proposed LWRT, and this section introduces the design of the proposed CNN based classification technique.

As shown in Fig. 6, the CNN consists of an (image) feature extraction block and a classification block. Because the feature extraction block is integrated inside, the CNN does not require any prior feature extraction function [11]. In addition, the convolution and pooling processes in the CNN make the CNN robust to geometrical distortions, such as scaling, shift, and rotation, and to the noise in the input image [11]. Considering the number of classes to classify, the fact that
TABLE 2: Average pcc (%) for SNR = −4, −6, −8 dB with various CNN hyper-parameters.

Input size  Conv-1 filters  Conv-2 filters  Conv-1 size  Conv-2 size  Neurons  Avg. pcc
64×64       10              20              5×5          5×5          400      83.08
64×64       20              40              5×5          5×5          400      84.11
64×64       30              60              5×5          5×5          400      89.56
64×64       40              80              5×5          5×5          400      86.83
128×128     10              20              5×5          5×5          400      89.64
128×128     20              40              5×5          5×5          400      87.86
128×128     30              60              5×5          5×5          400      90.08
128×128     40              80              5×5          5×5          400      89.19
256×256     10              20              5×5          5×5          400      88.22
256×256     20              40              5×5          5×5          400      88.72
256×256     30              60              5×5          5×5          400      87.72
256×256     40              80              5×5          5×5          400      87.47
128×128     30              60              3×3          3×3          400      88.78
128×128     30              60              5×5          3×3          400      90.89
128×128     30              60              5×5          5×5          400      90.08
128×128     30              60              7×7          3×3          400      92.78
128×128     30              60              7×7          5×5          400      88.86
128×128     30              60              7×7          7×7          400      91.75
128×128     30              60              9×9          3×3          400      91.89
128×128     30              60              9×9          5×5          400      87.72
128×128     30              60              9×9          7×7          400      91.94
128×128     30              60              9×9          9×9          400      89.50
128×128     30              60              7×7          3×3          100      91.94
128×128     30              60              7×7          3×3          200      93.67
128×128     30              60              7×7          3×3          300      92.53
128×128     30              60              7×7          3×3          400      92.78
128×128     30              60              7×7          3×3          500      89.19
128×128     30              60              7×7          3×3          600      90.72
128×128     30              60              7×7          3×3          700      90.25
the input image is in gray scale, and the subtle shapes of the image objects of the twelve LPI radar waveforms in the CWD-TFIs, we can observe that there are multiple similarities between the handwriting recognition problem [11] and the problem considered in this paper. Therefore, we start with the basic structure of the CNN studied in [11], [23], [24], [25] to develop an appropriate CNN structure for the proposed LWRT. The basic structure of the CNN can be described as a sequence of functions: Input − Conv − ReLU − Pooling − Conv − ReLU − Pooling − FC − Dropout − FC, where Conv represents the convolution layer, ReLU is the rectified linear unit, Pooling is the pooling layer, FC denotes the fully-connected layer, and Dropout is the dropout layer. Based on the basic structure, we design the hyper-parameters, such as the input size, the convolution filter size, the number of Conv feature maps, and the number of neurons in the FC, to find the optimal structure for the LPI waveform classification problem based on numerous Monte Carlo simulations for various conditions.

Due to the large number of independent parameters to determine, we adopt the Conv filter size of [5×5] used in the previous studies [11], [23], [24] as an initial choice. However, we use 400 as an initial value (large enough) for the number of neurons in the FC layer, which is different from the number of neurons (i.e., 100 ∼ 200) used in the previous studies [11], [23], [24], [25]. This initial choice of neurons is made not only because the input CWD-TFI size is much larger than the image size used in the previous studies [11], [23], [24], [25], but also because of the high complexity of the image features of the twelve LPI radar waveforms shown in Fig. 4. However, due to the large number of neurons in the FC layer, we may have an overfitting problem, for which we employ a Dropout layer between the two FC layers, denoted as FC-1 and FC-2 in Fig. 6.

In the first step of the design, we determine the input size of the CNN and the number of convolution filters with various simulations, where the input size is related to the resolution of the objects in the CWD-TFI, and the number of convolution filters is related to finding elementary visual features such as oriented edges, end-points, and corners. The visual features are then combined by the subsequent layers to detect higher-order features [11]. We develop multiple LWRTs as shown in Fig. 1 for various input sizes to the CNN, such as [64×64], [128×128], and [256×256], and for various numbers of convolution filters used in the first/second convolutional layers, such as 10/20, 20/40, 30/60, and 40/80, and test the LWRTs for the twelve LPI radar waveforms at low SNR, such as −4dB, −6dB, and −8dB. Note that this choice of the test SNR range is effective, since the performance of the LWRTs starts to degrade for SNRs lower than −4dB and drops to a very low percentage of correct classification (pcc) for SNRs lower than −8dB.

Table 2 shows some of the simulation results used to validate the design of the proposed CNN. As shown, the input size of [128×128] and the numbers of convolution filters for the first and second layers equal to 30 and 60, respectively, produce
the best result. In the next 10 rows (i.e., from the 13th row to the 22nd row) of Table 2, we present the test results for various filter sizes of the first and the second convolutional layers, where it is assumed that the filter size of the first layer is larger than or equal to that of the second layer [11], [25]. It turns out that the filter sizes of [7×7] and [3×3] for the two layers produce the best result. As for the stride size in the convolutional layers, since the CNN needs to extract features … in the nonlinear layer (omitted in Fig. 6 for illustrational simplicity), we use ReLU as a common choice. The last 7 rows of Table 2 show that the performance of the LWRT is best when there are 200 neurons in the FC-2. Note that there is a Dropout layer (of 50% rate), omitted in Fig. 6 for illustrational simplicity, between the FC-1 and FC-2 to avoid a possible overfitting problem [27]. The performance with and without the Dropout layer is 93.67% and 91.51%, respectively. The details of the final design of the CNN are described in Fig. 6.

V. PERFORMANCE DEMONSTRATION AND COMPARISON TO THE CONVENTIONAL TECHNIQUES
In this section, we compare the performance of the proposed LWRT to the recent LWRTs [6], [8], [9] introduced in the literature with numerous Monte Carlo simulations.

A. PERFORMANCE COMPARISON TO THE LWRT IN [9]
The first performance comparison is between the proposed LWRT with the SAT and the LWRT in [9], which utilizes the FRT to reduce the computational cost of processing a large number of signal samples. The waveform modulations considered are no modulation, BFSK, LFM, Frank, P1, P2, P3, and P4 codes, and the same simulation conditions as in [9], including the number of signal samples, are used in the simulations of the proposed LWRT with the SAT. Therefore, N = 10,000 samples are collected for each trial, and the tested SNR levels are from −10dB to 10dB. To apply the SAT, we use Na = 10, which results in Nsc ≥ 4, and the CWD-TFIs of the averaged signals are used in the testing phase. Table 3 shows the simulation result for SNR = −10dB, which is the same condition introduced in [9]. We observe that the proposed LWRT achieves perfect performance in classifying all of the eight waveform modulations, whereas the performance of the LWRT in [9], shown within parentheses, is 90% on average.

TABLE 3: Classification results (%) of the proposed LWRT at SNR = −10dB; the results of the LWRT in [9] are shown within parentheses.

         No Mod    BFSK      LFM       Frank     P1        P2        P3        P4
No Mod   100 (95)  0 (0)     0 (5)     0 (0)     0 (0)     0 (0)     0 (0)     0 (0)
BFSK     0 (5)     100 (95)  0 (0)     0 (0)     0 (0)     0 (0)     0 (0)     0 (0)
LFM      0 (5)     0 (0)     100 (95)  0 (0)     0 (0)     0 (0)     0 (0)     0 (0)
Frank    0 (0)     0 (0)     0 (0)     100 (85)  0 (5)     0 (0)     0 (10)    0 (0)
P1       0 (0)     0 (0)     0 (0)     0 (5)     100 (85)  0 (0)     0 (0)     0 (10)
P2       0 (0)     0 (0)     0 (0)     0 (0)     0 (0)     100 (95)  0 (5)     0 (0)
P3       0 (0)     0 (0)     0 (0)     0 (10)    0 (0)     0 (0)     100 (85)  0 (5)
P4       0 (0)     0 (0)     0 (0)     0 (0)     0 (10)    0 (5)     0 (0)     100 (85)

B. COMPARISON WITH THE LWRT IN [8]
The second performance comparison is between the proposed LWRT without the SAT and the recent LWRT in [8]. The waveform modulation schemes considered in [8] are LFM, BPSK, Costas, Frank, P1, P2, P3, and P4 codes, and the same simulation conditions as in [8] are used for the simulations of the proposed LWRT. Fig. 7 shows that the classification performance of the proposed LWRT is superior to that of the LWRT in [8] by about 3dB overall.

FIGURE 7: Comparison with the LWRT in [8]: pcc [%] versus SNR [dB] for (a) Overall, (b) LFM, (c) BPSK, (d) Costas, (e) Frank, (f) P1, (g) P2, (h) P3, (i) P4.
FIGURE 8: CWD-TFI of BPSK signals in [6].

FIGURE 9: (a) CWD-TFI of the T4 code copied from [6]; (b) CWD-TFI of the T4 code generated using the pseudo-code in the appendix of [2].
C. COMPARISON WITH THE LWRT IN [6]
… simulations of the proposed LWRT. However, there are two simulation conditions in [6] that we do not follow. Firstly, we do not assume that the BPSK waveform samples are obtained for multiple consecutive periods, as assumed in [6]. Fig. 8 shows an example of a CWD-TFI of the BPSK waveform repeating 5 times. Therefore, the generated CWD-TFI of the BPSK waveform used to test the proposed LWRT should be similar to a noisy version of Fig. 4(c). Note that this condition makes the classification more difficult for the proposed LWRT, since the CWD-TFI of the repeating BPSK has a unique and distinctive shape compared to the other waveforms. Secondly, we do not agree with the CWD-TFI of the T4 code shown in [6], which is copied in Fig. 9(a). In fact, the CWD-TFI of the T4 code shown in Fig. 9(b), very similar to Fig. 9(a), is generated by using the pseudo-code given in the appendix of [2], where the phase offset follows

φ[k] = mod{(2π/Nps)⌊Nps B(kTs)²/(2τpw) − Nps fc(kTs)/2⌋, 2π}.

However, the correct formula for the phase offset of the T4 code is stated in Table 1, which is the exact mathematical definition in [2] and [14], and the CWD-TFI of the T4 code should be as shown in Fig. 4(l). As a result, the correct CWD-TFI of the T4 code can be confused with that of the T2 code shown in Fig. 4(j) in noisy conditions, but the shape of the incorrect CWD-TFI of the T4 code in Fig. 9(a) may not be easily confused with the other LPI radar waveforms shown in Fig. 4.

Fig. 10 shows the result of the proposed LWRT (without the SAT) compared to the result in [6], where the proposed LWRT has about 5dB improvement overall. Notice that there is no result from [6] for SNR below −4dB. This significant improvement is because the proposed LWRT utilizes the grayscale input that has the amplitude information preserved in the CWD-TFI, and because the input size and the hyper-parameters of the CNN are designed to maximize the classification performance. These improvements also allow the proposed LWRT to classify a larger number of modulation schemes (i.e., 12 in total), as presented in the next subsection.

FIGURE 10: Comparison with the LWRT in [6]: pcc [%] versus SNR [dB] for (a) Overall, (b) LFM, (c) BPSK, (d) Costas, (e) Frank, (f) T1, (g) T2, (h) T3, (i) T4.
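The difference between the two T4 phase definitions can be checked numerically; a minimal sketch (all parameter values below are illustrative assumptions, not the paper's simulation settings):

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the paper):
fs  = 4000.0            # sampling rate [Hz]
Ts  = 1.0 / fs
fc  = 1000.0            # carrier frequency [Hz]
B   = 250.0             # modulation bandwidth [Hz]
tpw = 0.064             # pulse width tau_pw [s]
Nps = 2                 # number of phase states
t   = np.arange(int(tpw / Ts)) * Ts

def t4_phase(freq_term):
    """T4 phase offset: mod((2*pi/Nps)*floor(Nps*B*t^2/(2*tpw) - freq_term), 2*pi)."""
    return np.mod((2 * np.pi / Nps)
                  * np.floor(Nps * B * t**2 / (2 * tpw) - freq_term), 2 * np.pi)

phi_table1 = t4_phase(Nps * B * t / 2)    # correct definition (Table 1, [2], [14])
phi_ref6   = t4_phase(Nps * fc * t / 2)   # variant implied by the result shown in [6]
```

Whenever fc differs from B, the two sequences diverge, which is consistent with the different CWD-TFI shapes discussed above.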
TABLE 4: Signal parameters and simulation conditions for the performance evaluation of the proposed LWRT (U(a, b): uniform distribution over (a, b); U[a, b]: uniform distribution over the integers in [a, b]).

Radar waveforms  Parameters and values (or ranges)
LFM              fc: U(fs/6, fs/5);  B: U(fs/20, fs/16);  N: U[512, 1920]
Costas           FH sequence: {3, 4, 5, 6};  fmin: U(fs/30, fs/24);  N: U[512, 1920]
BPSK             Lc: {7, 11, 13};  fc: U(fs/6, fs/5);  Ncc: U[20, 24]
Frank, P1        M: {6, 7, 8};  fc: U(fs/6, fs/5);  Ncc: {3, 4, 5}
P2               fc: U(fs/6, fs/5);  Ncc: {3, 4, 5};  M: {6, 8}
P3, P4           fc: U(fs/6, fs/5);  Ncc: {3, 4, 5}
T1, T2           fc: U(fs/6, fs/5);  Ng: {4, 5, 6};  N: U[512, 1920]
T3, T4           fc: U(fs/6, fs/5);  B: U(fs/20, fs/10);  Ng: {4, 5, 6};  N: U[512, 1920]

D. CLASSIFICATION PERFORMANCE OF THE TWELVE LPI RADAR WAVEFORMS
In addition to the performance comparison of the proposed … Table 4 defines the signal parameters and simulation conditions used for the performance evaluation of the proposed … The parameter ranges in Table 4 are selected to satisfy conditions such as Nsc = 4 and Ns (= 2048) > N, and to allow much wider variations of the signal than those in subsections V-A, V-B, and V-C. Notice that the bandwidth Bs of the receiver is assumed to …

FIGURE 11: Performance comparison of the proposed LWRT with and without the SAT for all of the twelve LPI radar waveforms in [2]: pcc [%] versus SNR [dB] for (a) Overall, (b) LFM, (c) BPSK, (d) Costas, (e) Frank, (f) P1, (g) P2, …, (l) T3, (m) T4.
[12] K. G. Sheela and S. N. Deepa, "Review on methods to fix number of hidden neurons in neural networks," Mathematical Problems in Engineering, vol. 2013, 2013.
[13] N. Levanon and E. Mozeson, Radar Signals. John Wiley & Sons, 2004.
[14] M. I. Skolnik, "Introduction to radar," Radar Handbook, vol. 2, 1962.
[15] J. Starzyk and Z. Zhu, "Averaging correlation for C/A code acquisition and tracking in frequency domain," in Proc. 44th IEEE Midwest Symp. Circuits and Systems (MWSCAS), vol. 2, 2001, pp. 905-908.
[16] S.-H. Kong, "SDHT for fast detection of weak GNSS signals," IEEE Journal on Selected Areas in Communications, vol. 33, no. 11, pp. 2366-2378, 2015.
[17] H.-I. Choi and W. J. Williams, "Improved time-frequency representation of multicomponent signals using exponential kernels," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 6, pp. 862-871, 1989.
[18] P. Flandrin, "Some features of time-frequency representations of multicomponent signals," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP'84), vol. 9, 1984, pp. 266-269.
[19] J. E. Fielding, "Polytime coding as a means of pulse compression," IEEE Transactions on Aerospace and Electronic Systems, vol. 35, no. 2, pp. 716-721, 1999.
[20] J. A. Parker, R. V. Kenyon, and D. E. Troxel, "Comparison of interpolating methods for image resampling," IEEE Transactions on Medical Imaging, vol. 2, no. 1, pp. 31-39, 1983.
[21] T. M. Lehmann, C. Gonner, and K. Spitzer, "Survey: Interpolation methods in medical image processing," IEEE Transactions on Medical Imaging, vol. 18, no. 11, pp. 1049-1075, 1999.
[22] Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional networks and applications in vision," in Proc. IEEE Int. Symp. Circuits and Systems (ISCAS), 2010, pp. 253-256.
[23] D. Ciregan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3642-3649.
[24] P. Y. Simard, D. Steinkraus, and J. C. Platt, "Best practices for convolutional neural networks applied to visual document analysis," in Proc. ICDAR, vol. 3, 2003, pp. 958-962.
[25] C. Poultney, S. Chopra, and Y. LeCun, "Efficient learning of sparse representations with an energy-based model," in Advances in Neural Information Processing Systems, 2007, pp. 1137-1144.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[27] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[28] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.

Seung-Hyun Kong (M'06-SM'16) received the B.S. degree in Electronics Engineering from Sogang University, Korea, in 1992, the M.S. degree in Electrical Engineering from Polytechnic University, New York, in 1994, and the Ph.D. degree in Aeronautics and Astronautics from Stanford University, CA, in Jan. 2006. From 1997 to 2004, he was with Samsung Electronics Inc. and Nexpilot Inc., both in Korea, where he worked on developing wireless communication system standards and UMTS mobile positioning technologies. In 2006 and from 2007 to 2009, he was a staff engineer at Polaris Wireless Inc., Santa Clara, and at Qualcomm Inc. (Corp. R&D), San Diego, respectively, where his research was on Assisted-GNSS and wireless positioning technologies. Since 2010, he has been with the Korea Advanced Institute of Science and Technology (KAIST), where he is currently an associate professor at the CCS Graduate School of Green Transportation. He serves as an Editor of IET Radar, Sonar and Navigation, and an Associate Editor of IEEE Transactions on Intelligent Transportation Systems and IEEE Access. His research interests include signal processing for GNSS, neural networks for sensing, and vehicular communication systems.

Minjun Kim received the B.S. degree in Electronics Engineering from Chung-Ang University, Korea, in 2017. He is currently pursuing the M.S. degree at the CCS Graduate School of Green Transportation in the Korea Advanced Institute of Science and Technology (KAIST), Korea. His research interests include signal processing, radar, deep learning, and V2X for autonomous vehicles.