A Novel Physical Layer Spoofing Detection Based
sparse decomposition and principal component analysis (PCA).

1) Sparse decomposition: Suppose that a received signal set is $Y = [y_1, \ldots, y_N] \in \mathbb{R}^{m \times N}$, where $N$ is the number of data samples and each $y_i \in \mathbb{R}^{m \times 1}$, $i = 1, 2, \ldots, N$. We want to find a dictionary $D = [d_1, \ldots, d_n] \in \mathbb{R}^{m \times n}$ and a sparse vector matrix $X = [x_1, \ldots, x_n] \in \mathbb{R}^{m \times n}$ that satisfy

$$y = Dx = \sum_{k=1}^{n} d_k x_k \qquad (1)$$

where the atoms $d_k$ are the column vectors of the redundant complete dictionary. The sparse representation model is given by

$$\min \|x\|_0 \quad \text{s.t.} \quad y = Dx \qquad (2)$$

where $\|\cdot\|_0$ denotes the $\ell_0$ norm, whose role is to count the number of nonzero entries.

This problem is equivalent to a convex optimization problem under the $\ell_1$ norm. The model is

$$\min \|x\|_1 \quad \text{s.t.} \quad y = Dx \qquad (3)$$

where $\|\cdot\|_1$ denotes the $\ell_1$ norm.

Many algorithms can solve this optimization problem, such as gradient projection and greedy pursuit algorithms. In this study, we use the popular orthogonal matching pursuit (OMP) algorithm, and the redundant overcomplete dictionary used is a wavelet dictionary.

2) Principal component analysis (PCA): After sparse representation, we use principal component analysis (PCA) to reduce the dimension of the sparse coefficient vector. Suppose that the input sparse vector $x = [x_1, \ldots, x_n]$ is transformed into $x_V = [x_1, \ldots, x_v]$ in the following way:

$$x_V = V(x - \mu_x) \qquad (4)$$

where $\mu_x$ denotes the mean of the samples and $V$ is the projection matrix. The objective of PCA is to choose a set of projection vectors such that $x_V$ represents the original sparse coefficient vector $x$ with the minimum mean square error.

Thus, the received signal set is $Y = [y_1, \ldots, y_N]$, its corresponding sparse representation is $X = [x_1, \ldots, x_N]$, and after PCA processing each sparse coefficient vector is $x_i = [x_{i1}, \ldots, x_{iv}]$, $i \in \{1, \ldots, N\}$, where $N$ denotes the number of signals and $v$ is the vector dimension of the sparse coefficients.

B. Automatic representative selection algorithm

In this subsection, we present a new automatic representative selection algorithm (ARSA) to dichotomize the sparse representations into two classes and obtain the target sparse coefficients $x^{(0)}$ and $x^{(1)}$.

Firstly, in order to reflect the distinctions among the obtained sparse representations, we extract three quantifiable features according to the following processing. The second feature is the middle-section variance:

$$F^{(2)} = \frac{1}{2i-1} \sum_{k=2^{n-1}-i}^{2^{n-1}+i} \left( d_k^{(i)} - \frac{1}{2i-1} \sum_{k=2^{n-1}-i}^{2^{n-1}+i} d_k^{(i)} \right)^2 \qquad (6)$$

The third feature is the shape imbalance:

$$F^{(3)} = \sum_{k=2^{n-1}+i}^{2^{n}-1} d_k^{(i)} - \sum_{k=1}^{2^{n-1}-i} d_k^{(i)} \qquad (7)$$

In this way, one feature space $[F^{(1)}, F^{(2)}, F^{(3)}]$ is established. Then, we directly resort to the equal combination (EC) rule to combine the features:

$$F = \mu_1 F^{(1)} + \mu_2 F^{(2)} + \mu_3 F^{(3)} \qquad (8)$$

where $\mu_i = \frac{1}{\mathrm{mean}(F^{(i)})}$.

Up to now, we have established an integrated feature $F$ to measure the difference levels of the obtained signal sparse representations. Thus, each received signal sparse representation $x_i = [x_{i1}, \ldots, x_{iv}]$ can be mapped to a single point in the feature vector $F = [F_1, \ldots, F_k]$, $k \in [1, \ldots, L]$. Suppose that the number of signals at level $F_k$ is denoted by $n_k$ and the total number of signals by $N = n_1 + n_2 + \ldots + n_L$. Then the probability distribution of each value $F_k$ is

$$p(F_k) = n_k / N, \quad p(F_k) \ge 0, \quad \sum_{k=1}^{L} p(F_k) = 1 \qquad (9)$$

In this study, we utilize an unsupervised approach to search for the optimal threshold (denoted by $\varepsilon$). According to [11], choosing the optimal threshold is an optimization problem:

$$k^* = \arg\max_{k}\ \sigma_B^2(k), \quad k \in [1, \ldots, L]$$
$$\text{subject to} \quad \omega(k)(1 - \omega(k)) > 0 \quad \text{or} \quad 0 < \omega(k) < 1 \qquad (10)$$

where $\omega(k) = \sum_{i=1}^{k} p(F_i)$ and $\sigma_B^2(k)$ is the between-class variance [11]:

$$\sigma_B^2(k) = \frac{[\mu_T\, \omega(k) - \mu(k)]^2}{\omega(k)[1 - \omega(k)]} \qquad (11)$$

where $\mu_T$ is the total mean level, i.e., $\mu_T = \sum_{i=1}^{L} F_i\, p(F_i)$, and $\mu(k)$ is the class mean level, i.e., $\mu(k) = \sum_{i=1}^{k} F_i\, p(F_i)$.

Thus, the optimal threshold $k^*$ is selected, and the feature vector $F = [F_1, \ldots, F_L]$ is dichotomized into the two classes $[F_1, \ldots, F_{k^*}]$ and $[F_{k^*+1}, \ldots, F_L]$ at level $k^*$. Then, we choose the middle level of each class as the representative, and the corresponding sparse coefficients are selected, i.e., the sparse coefficients $x^{(0)}$ and $x^{(1)}$ corresponding to the middle-level feature of each class are obtained.
2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
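The front end of the scheme (Eqs. (1)-(4)), sparse coding followed by PCA reduction, can be sketched as below. `omp` and `pca_reduce` are illustrative numpy implementations, not the authors' code, and the paper's wavelet dictionary is replaced here by a generic matrix `D`:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit for Eq. (2)/(3):
    select k atoms of dictionary D that best explain signal y."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit of y on the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def pca_reduce(X, v):
    """Project sparse coefficient vectors (rows of X) to v dimensions,
    Eq. (4): x_V = V(x - mu_x), with V the top-v principal directions."""
    mu = X.mean(axis=0)
    # Principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:v].T
```

For an orthonormal dictionary this OMP sketch recovers a k-sparse coefficient vector exactly; with a redundant wavelet dictionary recovery is approximate, which is the setting the paper operates in.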
C. Correlation detection

When the target sparse coefficients $x^{(0)}$ and $x^{(1)}$ are selected, we examine the correlation between the two selected sparse coefficients to determine the attack situation. In this study, the Pearson correlation coefficient is used to depict the degree of correlation:

$$r = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n} (X_i - \bar{X})^2}\, \sqrt{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}} \qquad (12)$$

Fig. 2: Signal processing under normal situations (panels: original signals and their sparse representations).

Fig. 3: Signal processing under spoofing attack situation (panels: original signals (a) and (b) and their sparse representations).

A. Data acquisition

We configure two mobile nodes (homemade hardware based on IEEE 802.15.4), worn on the chest and the arms, as signal transmitters, and a software-defined radio (SDR) platform is used to emulate the controller. The SDR used is the Microsoft Research Software Radio, also known as Sora. Sora is a high-performance, fully programmable software radio based on general-purpose processors (i.e., CPUs) in commodity PC architecture. Fig. 1 shows the mobile nodes and the SDR platform, respectively, in our experiments.

Fig. 4: The correlation analysis of the experiment (legend: spoofing situations vs. normal situations; vertical axis: correlation).

TABLE I: The composition of the training phase.