Article
Signal Pattern Recognition Based on Fractal Features
and Machine Learning
Chang-Ting Shi
College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China;
[email protected]
Appl. Sci. 2018, 8, 1327
Received: 5 July 2018; Accepted: 23 July 2018; Published: 8 August 2018
Keywords: pattern recognition; fractal dimension; feature evaluation; random forest classifier
1. Introduction
As a classic pattern recognition problem, the modulation recognition of communication signals
has many important research prospects and application value. In the military field, it is a precondition
to carry out communication reconnaissance and jamming. Once the signal modulation of the enemy’s
communication system is clarified, the enemy’s signals can be demodulated and communication
information can be obtained. In the civil field, signal modulation recognition can be used for signal
confirmation, interference identification, spectrum management, and spectrum monitoring. Therefore,
a secure and reliable feature extraction method is needed to effectively recognize the different signal
modulations [1] in a complex environment.
Modulation pattern recognition generally uses statistical pattern recognition methods and
decision-making methods. At present, most of the applications are statistical pattern recognition
methods, which can be divided into two parts: feature extraction and classifier design. Good features
are a key factor to improve the classification accuracy. In recent decades, researchers have proposed
many methods for extracting feature parameters such as instantaneous features [2], fractal features [3],
signal constellation reconstruction [4], etc. As another key factor of signal pattern recognition, classifiers
directly determine the classification results. Machine learning classifiers are widely used in signal
classification, and include the decision tree classifier [5], K-nearest neighbor (KNN) classifier [6],
support vector machine (SVM) classifier [7], neural network classifier [8], etc. In addition, confusion
matrix, recognition rate, and running time are widely used as evaluation methods. Reference [9]
proposed a novel method called the A-test for evaluating the structural risk of a classifier both
quantitatively and graphically. The structural risk of a classification method was defined as the
instability of the method with the new test data. In the A-test method, the classification error
percentage for the different values of z is calculated by using the balanced Z-fold verification method.
In this paper, we focus mostly on fractal feature extraction. Different methods are proposed for
estimating the fractal dimension. However, considering that fractal features are limited to a few
fractal types (e.g., box dimension), the application of fractal methods for the purpose of extracting
discriminative features has not been deeply investigated. The main goal of the present paper is thus
to help fill this research gap. Therefore, many kinds of fractal dimensions (i.e., box fractal dimension,
Katz fractal dimension, Higuchi fractal dimension, Petrosian fractal dimension, and Sevcik fractal
dimension) are extracted in order to provide a comprehensive description.
In the classifier section, Back-Propagation (BP) neural network, grey relation analysis, random
forest, and K-nearest neighbor are used to classify the modulated signals. BP neural networks have
powerful pattern recognition capabilities, providing better robustness and potential fault tolerance.
Therefore, it is easy to obtain a higher recognition rate. A kind of elastic back propagation neural
network is used as a classifier in Reference [8]. In Reference [10], an improved bee colony algorithm is
applied to digital modulation recognition by optimizing a BP neural network. Grey correlation can
be used to analyze and describe the attributes and characteristics of the model, which is suitable for
mathematical statistical analysis. In Reference [11], the basic grey relation model is proposed.
Random forest has high prediction accuracy and good tolerance to outliers and noise. It is also not
prone to overfitting. In References [12,13], random forest is used for dimensionality reduction at low
signal-to-noise ratio (SNR). In Reference [14], information entropy is used as the modulation
feature, and random forest is used as the classifier. A high recognition rate is obtained at low SNR.
The K-nearest neighbor algorithm has the advantages of simple calculation and high recognition
efficiency [6]. A KNN–SVM joint classifier is proposed in Reference [7], which can reduce the
sensitivity to the choice of kernel function parameters and ease the difficulty of parameter selection.
Figure 1 shows the process of signal modulation recognition in this paper. Firstly, five fractal
dimensions are extracted for the eight different kinds of digital modulated signals. Then, the noise
robustness, data distribution, and computational complexity are evaluated. After that, four different
kinds of classifiers are introduced. Finally, the classification performance is given through the
confusion matrix and recognition rate.
Figure 1. The process of signal modulation recognition in this paper: fractal feature extraction (box,
Katz, Higuchi, Petrosian, and Sevcik dimensions), feature evaluation, and classification.
2. Related Work
In this section, we provide a description of some works related to fractal feature extraction.
Fractal-based pattern recognition has been paid increasing attention, because it can effectively deal
with feature information in a complex electromagnetic environment. One of the first studies of fractal
theory [15,16] was conducted by B.B. Mandelbrot in 1975, as a branch of modern mathematical theory.
The fractal dimension is the main parameter in fractal theory. It quantitatively describes the complexity
of the fractal set. As a kind of time series, the communication modulation signal can be effectively
described by the fractal dimension. There are many definitions of and measurement methods for
fractal dimension [17–20].
The common one-dimensional fractal dimensions are box dimension, Hausdorff–Besicovitch
dimension, Sevcik fractal dimension, Katz dimension, Higuchi dimension, and Petrosian dimension.
Among them, the box dimension is easy to calculate, and a box with different side lengths is used to
describe the change of the signal waveform. Smaller side lengths of the box lead to a longer calculation
time, but the recognition rate of the signal will increase. In References [21,22], the box dimension was
used to characterize the signal characteristics. The Hausdorff–Besicovitch dimension [23,24] is the
basic fractal dimension. Hausdorff–Besicovitch and box dimensions were considered to minimize the
antenna size while holding a high radiation efficiency in Reference [25]. This confirmed the importance
of the Hausdorff–Besicovitch dimension, but also indicated that it has a high computational complexity,
making it difficult to realize in practice. The Sevcik fractal dimension was proposed to calculate the
fractal dimension of waveforms in Reference [26]. This method could be used to quickly estimate
the Hausdorff–Besicovitch dimension of waveforms and measure the complexity and randomness of
waveforms. Petrosian proposed a fast method to estimate the fractal dimension of a finite sequence [27],
which converts the data to binary sequence before estimating the fractal dimension from time series.
Tricot compared the estimation accuracy delivered by Katz’s and Higuchi’s methods on four synthetic
fractal waveforms. The results indicated that Katz’s method invariably under-estimated the true
dimension, but Higuchi’s method was more accurate in estimating the fractal dimensions.
In addition, in Reference [28], the definition and mathematical analysis of the Weierstrass–
Mandelbrot fractal function were described. The application of fractal functions in engineering was
discussed in Reference [29], including fractal modeling and calculation. In Reference [30], the effect of
multiple fractals on a digital communication signal was studied under different noise distributions.
The simulation results showed that multiple fractals could effectively extract the feature of FSK
(Frequency Shift Keying) signals under different noise distributions. Additionally, the use of multiple
fractals was more effective in depicting subtle features [31–33]. However, the problem of large
complexity in multifractal calculation could not be avoided [34,35].
Therefore, how to select the proper fractal dimension algorithm according to the signal complexity,
and then accurately classify the different communication modulation signals are the key problems in
this paper. Therefore, we conducted a systematic empirical study on how to choose fractal dimension
features for classifying communication modulation signals.
3. Fractal Dimension
nonempty compact subsets of X. For a box with a side length of ε, the minimum number N(A, ε) of
boxes required to cover A is shown as follows:

N(A, \varepsilon) = \min\{ M : A \subset \bigcup_{i=1}^{M} N(x_i, \varepsilon) \}, \quad (1)

where x_1, x_2, \cdots, x_M are different points of X. When ε approaches 0, the box dimension is shown
as follows:

D_b = \lim_{\varepsilon \to 0} \frac{\ln N(A, \varepsilon)}{\ln(1/\varepsilon)}. \quad (2)
As indicated above, the box dimension merely represents the geometric dimension of the signal,
but does not reflect the density distribution in the planar space.
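As an illustrative sketch (not the paper's implementation), the limit in Equation (2) can be approximated by counting occupied grid cells at several box sizes and fitting the slope of ln N(A, ε) against ln(1/ε); the grid sizes and the point-counting approximation below are our own assumptions:

```python
import numpy as np

def box_dimension(y, sizes=(4, 8, 16, 32, 64)):
    # Treat the sampled waveform as a curve in the unit square.
    x = np.linspace(0.0, 1.0, len(y))
    y = np.asarray(y, dtype=float)
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)
    counts = []
    for n in sizes:
        # Grid cell occupied by each sample point at resolution n x n.
        ix = np.minimum((x * n).astype(int), n - 1)
        iy = np.minimum((y * n).astype(int), n - 1)
        counts.append(len(set(zip(ix.tolist(), iy.tolist()))))
    # Since eps = 1/n, the slope of ln N vs ln n estimates D_b.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope
```

A straight line yields a value close to 1, while a noise-like waveform yields a value between 1 and 2.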
The Katz fractal dimension is defined as follows:

D = \frac{\log(N)}{\log(N) + \log(d/L)}, \quad (3)

where L is the length of the waveform, and d is the maximum distance from the initial point (x_1, y_1)
to the other points. They can be defined as follows:

L = \sum_{i=0}^{N-2} \sqrt{(y_{i+1} - y_i)^2 + (x_{i+1} - x_i)^2}, \quad (4)

d = \max_i \sqrt{(x_i - x_1)^2 + (y_i - y_1)^2}. \quad (5)
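Equations (3)–(5) can be transcribed directly; assuming unit spacing on the x-axis (so the number of waveform steps is N − 1), a minimal sketch is:

```python
import numpy as np

def katz_dimension(y):
    # Katz estimator, Equation (3): D = log(N) / (log(N) + log(d / L)),
    # with L the waveform length (Equation (4)) and d the maximum
    # distance from the initial point (Equation (5)).
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)           # unit x-spacing (assumption)
    L = np.sum(np.hypot(np.diff(x), np.diff(y)))
    d = np.max(np.hypot(x - x[0], y - y[0]))
    N = len(y) - 1                               # number of waveform steps
    return np.log(N) / (np.log(N) + np.log(d / L))
```

For a straight line, L = d, so the estimator returns exactly 1.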
For the Higuchi fractal dimension, the length L_m(k) of each reconstructed subsequence is calculated
as follows:

L_m(k) = \frac{\left[ \sum_{i=1}^{[(N-m)/k]} |x(m+ik) - x(m+(i-1)k)| \right] (N-1)}{[(N-m)/k] \cdot k}. \quad (7)

Therefore, the total average length of the discrete time signal sequence is shown as follows:

L(k) = \sum_{m=1}^{k} L_m(k). \quad (8)
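A sketch of Equations (7) and (8) follows, with two assumptions on our part: the conventional extra 1/k normalization from Higuchi's original formulation is included, and the dimension is read off as the slope of ln L(k) against ln(1/k) with the per-k lengths averaged over m:

```python
import numpy as np

def higuchi_dimension(x, kmax=8):
    # For each delay k and initial time m, build the subsequence
    # x(m), x(m+k), x(m+2k), ... and measure its normalized length.
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, kmax + 1)
    Lk = []
    for k in ks:
        lengths = []
        for m in range(k):                 # initial times m = 1..k (0-based)
            sub = x[m::k]
            n_int = len(sub) - 1           # [(N - m) / k] intervals
            if n_int < 1:
                continue
            # Equation (7), with Higuchi's usual extra 1/k factor.
            lengths.append(np.abs(np.diff(sub)).sum() * (N - 1) / (n_int * k * k))
        Lk.append(np.mean(lengths))
    # L(k) scales as k**(-D): fit ln L(k) against ln(1/k).
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(Lk), 1)
    return slope
```

A straight line gives exactly 1; white noise gives a value near 2.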
Then, find the total number of adjacent symbol changes in the sequence:

N_\Delta = \frac{1}{2} \sum_{i=1}^{N-2} |z_{i+1} - z_i|. \quad (11)
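Equation (11) can be sketched as follows; since the closed-form Petrosian estimator is not reproduced in the text above, the final formula below is Petrosian's published one [27], stated here as an assumption:

```python
import numpy as np

def petrosian_dimension(x):
    # Binarize the series by the sign of successive differences, then
    # count the adjacent symbol changes N_delta as in Equation (11)
    # (ties, where the difference is zero, are counted as changes here).
    x = np.asarray(x, dtype=float)
    z = np.sign(np.diff(x))
    n_delta = np.count_nonzero(np.diff(z))
    # Petrosian's published estimator (assumption, not shown above):
    # D = log10(N) / (log10(N) + log10(N / (N + 0.4 * N_delta))).
    N = len(x)
    return np.log10(N) / (np.log10(N) + np.log10(N / (N + 0.4 * n_delta)))
```

A monotone sequence has no symbol changes and returns exactly 1.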
The Sevcik fractal dimension is calculated as follows:

D = 1 + \frac{\ln(L) + \ln(2)}{\ln[2(N-1)]}, \quad (14)

where L is the length of the waveform normalized into the unit square:

L = \sum_{i=0}^{N-2} \sqrt{(y_{i+1}^* - y_i^*)^2 + (x_{i+1}^* - x_i^*)^2}. \quad (15)
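A minimal sketch of Equations (14) and (15) follows, assuming the waveform is normalized into the unit square before its length is measured:

```python
import numpy as np

def sevcik_dimension(y):
    # Map the waveform into the unit square to obtain (x*, y*), measure
    # the normalized length L of Equation (15), then apply Equation (14).
    y = np.asarray(y, dtype=float)
    N = len(y)
    xs = np.linspace(0.0, 1.0, N)                      # x* in [0, 1]
    ys = (y - y.min()) / (y.max() - y.min() + 1e-12)   # y* in [0, 1]
    L = np.sum(np.hypot(np.diff(xs), np.diff(ys)))
    return 1.0 + (np.log(L) + np.log(2.0)) / np.log(2.0 * (N - 1))
```

The estimate is close to 1 for a smooth line and grows toward 2 for noise-like waveforms.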
4. Classifier Algorithm
4. For i = 1 to K:
       Calculate the weight of each characteristic item: wei(x, C_t) = \sum_i sim(x, d_i) · y(d_i, C_t)
5. wei_max = max(wei)
6. Label_out = find(wei == wei_max)
7. End
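The weight-and-vote steps above can be sketched as a similarity-weighted K-nearest-neighbor rule; the inverse-distance similarity sim(x, d_i) = 1/(1 + ||x − d_i||) is our own illustrative choice, not specified in the text:

```python
import numpy as np

def knn_weighted_predict(x, train_X, train_y, k=3):
    # wei(x, C_t) = sum over the k nearest neighbors d_i of
    # sim(x, d_i) * y(d_i, C_t), with y(d_i, C_t) = 1 iff d_i is in C_t.
    x, train_X, train_y = map(np.asarray, (x, train_X, train_y))
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    sim = 1.0 / (1.0 + dists[nearest])     # illustrative similarity choice
    classes = np.unique(train_y)
    wei = {c: sim[train_y[nearest] == c].sum() for c in classes}
    return max(wei, key=wei.get)           # step 6: label with wei_max
```

The sample to be identified is assigned to the class accumulating the largest similarity weight among its k nearest training samples.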
5. Simulation Results
Figure 3. Distributions of the five fractal features (box, Katz, Higuchi, Petrosian, and Sevcik
dimensions) for the eight modulated signals (2ASK, 4ASK, 2FSK, 4FSK, 8FSK, BPSK, 16QAM, and
32QAM) under different SNRs (SNR/dB).
According to Figure 3, the differences among the eight different signals were obvious. Meanwhile,
in the whole SNR range, the feature distribution was only slightly fluctuating, which means that the
fractal features were not sensitive to the noise environment. Comparing the five different fractal
features, the data distribution was also quite different. The Katz dimension differed the most among
the five features. The box dimension could distinguish the eight signals effectively. The Katz and
Sevcik dimensions could better distinguish 4ASK, 16QAM, and 32QAM. The Petrosian dimension
could better distinguish the 2FSK and 16QAM signals. The Higuchi fractal dimension was the worst;
the aliasing among its signals was serious.
One of the main difficulties in automatic modulation recognition is the problem of a poor recognition
rate in a large dynamic SNR environment. In practical applications, the SNR changes rapidly with the
environment and it is difficult to maintain stability. Therefore, high requirements are placed on the
adaptability to a large dynamic SNR. For automatic modulation recognition based on feature extraction
and machine learning, the noise robustness of the classifier mainly depends on the selection of features.
From Figure 3, we can see that the fractal features under different SNRs showed low noise sensitivity.
Therefore, an algorithm for analyzing the overall anti-noise performance is provided as follows.
Calculating a feature parameter C for the ith modulation method, the zero-center normalized
statistical variance Var(C_i) of the corresponding parameter values at different SNRs is shown as follows:

Var(C_i) = \frac{1}{K} \sum_{k=1}^{K} [C_i(k) - E(C_i(k))]^2. \quad (16)

K is the number of samples for each feature. C_i(k) is the zero-center normalized statistical value of
the parameter C, which corresponds to the feature value c_i(k) for the ith modulation under different
SNRs. C_i(k) can be calculated by the following equation:

C_i(k) = \frac{c_i(k)}{\frac{1}{K} \sum_{k=1}^{K} c_i(k)} - 1, \quad k = 1, 2, \ldots, K. \quad (17)
Then, the sum of the zero-center normalized statistical variance Var(C_i) over the n different
modulation modes is calculated as SVar(C):

SVar(C) = \sum_{i=1}^{n} Var(C_i). \quad (18)
Table 1 shows the anti-noise evaluation results of the five different fractal features. The SVar(C)
values of the five fractal features are all on the order of 10^{-3}, which reflects the superior noise immunity
of fractal features. Therefore, fractal features have an advantage in overcoming the problem of poor
robustness in traditional automatic modulation recognition.
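Equations (16)–(18) can be sketched as follows, assuming the feature values are arranged as a matrix with one row per modulation type and one column per SNR point:

```python
import numpy as np

def svar(features):
    # features[i, k] = feature value c_i(k) of the ith modulation at the
    # kth SNR point. Zero-center normalize each row (Equation (17)); its
    # variance is Equation (16), and the sum over rows is Equation (18).
    features = np.asarray(features, dtype=float)
    C = features / features.mean(axis=1, keepdims=True) - 1.0
    return float(np.sum(np.var(C, axis=1)))
```

A feature whose value barely moves across SNR yields a small SVar(C), consistent with the order-of-10^{-3} values reported in Table 1 for the fractal features.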
Furthermore, in order to show the data distribution of the five different features more clearly,
the box diagram distribution of these features is given at SNR = 0 dB. A box diagram is a statistical
method to reflect the discrete data distribution. The box diagram consists of five data points: minimum,
lower quartile, median, upper quartile, and maximum. The lower quartile, median, and upper quartile
make up a “box with compartments”. Extension lines are established between the quartiles and the
maximum and minimum values. There is always a wide variety of “dirty data” in real data, which are
also called “outliers”. Outliers are plotted individually. Therefore, the outliers of a dataset can be recognized
intuitively, and the data dispersion and bias can be judged. The box diagram distribution is shown in
Figure 4.
According to Figure 4, the data distribution of the Sevcik dimension had the biggest difference,
which means it had the best ability to distinguish the eight different kinds of communication
modulation signals. The data distribution of the box dimension and Katz dimension were also
ideal, where the intra-class distribution of the Katz dimension was the most concentrated and the
aggregation degree was the best. The data ranges of Higuchi dimension and Petrosian dimension were
small, and the data distribution between different signals was similar.
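For reference, the five data points of the box diagram and an outlier rule can be computed as follows; the 1.5 × IQR fence used to flag outliers is the common convention, assumed here rather than stated in the text:

```python
import numpy as np

def box_summary(data):
    # Five-number summary behind the box diagram: minimum, lower
    # quartile, median, upper quartile, maximum; points beyond the
    # 1.5 * IQR fences are flagged as outliers.
    data = np.asarray(data, dtype=float)
    q1, median, q3 = np.percentile(data, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {"min": data.min(), "q1": q1, "median": median, "q3": q3,
            "max": data.max(), "outliers": data[(data < lo) | (data > hi)]}
```

The quartiles define the box, the fences define the whiskers, and any flagged outliers are plotted individually.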
Figure 4. Box diagram distribution of the five fractal features at SNR = 0 dB.
Computational complexity is also very important for evaluating features. Therefore, the running
times of the five fractal features were calculated with a DELL Inspiron 14 laptop. The CPU was an
Intel(R) Core(TM) i7-4510U, and the memory was 8 GB. The results are shown in Table 2.
Table 2. Characteristic complexity evaluation.

Type        Box    Katz   Higuchi   Petrosian   Sevcik
Runtime/s   1.92   1.66   82.199    2.83        1.26
Table 2 indicates that the running time of the Sevcik dimension was the shortest, meaning
that the Sevcik dimension had the lowest computational complexity. The running time of the Higuchi
dimension was more than 30 times longer than the others. Therefore, the computational complexity of
the Higuchi dimension was much higher than that of the others.
The simulation results indicated that the BP neural network and random forest classifiers
performed better at SNR = 0 dB. For the eight different kinds of communication modulation signals,
4ASK, 2FSK, 16QAM, and 32QAM had better recognition rates than the others. The recognition rate
of 8FSK was the worst; the classification errors of 8FSK were the largest among the eight different
signals.
Similarly, in order to analyze the recognition effect under high SNR, the confusion matrices of the
four classifiers at 10 dB are given in Figure 6.
As shown in Figure 6, among the eight different signals, 8FSK again had the worst recognition
rate. At 10 dB, the recognition rate of the random forest classifier was the best. The BP neural network
had a great ability to distinguish most signals, but the recognition rate of 8FSK was not very good;
the number of correct classifications was 0. The recognition rates of the grey correlation and K-nearest
neighbor classifiers were poor, but the K-nearest neighbor classifier was better than the grey correlation
classifier. On the whole, compared with 0 dB, the recognition rate was significantly improved at
10 dB SNR.
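The bookkeeping behind these results can be sketched as follows (a minimal confusion-matrix and recognition-rate helper, assuming integer class labels 0..n−1):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[i, j] = number of samples whose true class is i and whose
    # predicted class is j; the diagonal holds the correct decisions.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def recognition_rate(cm):
    # Overall recognition rate = correctly classified / total samples.
    return np.trace(cm) / cm.sum()
```

Row i of the matrix shows how samples of modulation i are spread over the predicted classes, which is how the 8FSK misclassifications above can be read off.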
Figure 6. Confusion matrices at 10 dB SNR: (a) BP; (b) GRA; (c) KNN; (d) RF.
Figure 7 shows the overall recognition rate curves under different SNRs. It indicates that the
recognition rate of the four different classifiers was about 45% at SNR = −5 dB. With increasing SNR,
the recognition rate increased continuously, and it was stable at SNR = 8 dB. The BP neural network,
grey relational, and K-nearest neighbor classifiers stabilized at 85%, but the random forest classifier
stabilized at 96%. In addition, the recognition rate of the random forest classifier was better than the
others in the whole SNR range. Therefore, the recognition performance of the random forest classifier
was optimal.
Figure 7. Recognition rate curve under different SNRs.
6. Conclusions
In this paper, a systematic empirical study is conducted in which fractal theory is used to extract the fractal features
of eight different communication modulation signals. Five kinds of fractal features are used, including
box fractal dimension, Katz fractal dimension, Higuchi fractal dimension, Petrosian fractal dimension,
and Sevcik fractal dimension. Additionally, the evaluation methods of five fractal features are proposed.
The noise robustness is analyzed by an anti-noise function. The data distribution is calculated using
box diagrams, and the computational complexity is evaluated by the running time. Finally, BP neural
network, grey relation analysis, random forest, and K-nearest neighbor classifiers are used to classify
the communication modulation signals. The experimental results showed that the recognition rate of
random forest could reach 96% at SNR = 10 dB. In addition, the recognition rate of the random forest
classifier was superior to the others in the whole SNR range. Therefore, the classification performance
of the random forest classifier was optimal.
References
1. Liu, K.; Zhang, X.; Chen, Y.Q. Extraction of Coal and Gangue Geometric Features with Multifractal
Detrending Fluctuation Analysis. Appl. Sci. 2018, 8, 463. [CrossRef]
2. Dahap, B.I.; Liao, H.S. Advanced algorithm for automatic modulation recognition for analogue & digital
signals. In Proceedings of the International Conference on Computing, Control, Networking, Electronics
and Embedded Systems Engineering, Khartoum, Sudan, 7–9 September 2015; pp. 32–36.
3. Shen, Y.; Liu, X.; Yuan, X. Fractal Dimension of Irregular Region of Interest Application to Corn Phenology
Characterization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1402–1412. [CrossRef]
4. Zhang, J.; Wang, X.; Yang, X. A method of constellation blind detection for spectrum efficiency enhancement.
In Proceedings of the International Conference on Advanced Communication Technology, Pyeongchang,
Korea, 31 January–3 February 2016.
5. Wei, D.; Wei, J. A MapReduce Implementation of C4.5 Decision Tree Algorithm. Int. J. Database Theory Appl.
2014, 7, 49–60.
6. Jiang, L.B.; Liu, Z.Z.; Wen, D.G. A New Recognition Algorithm of Digital Modulation Signals. Microelectron.
Comput. 2013, 44, 112–123.
7. Mehmood, R.M.; Lee, H.J. Emotion classification of EEG brain signal using SVM and KNN. In Proceedings
of the IEEE International Conference on Multimedia & Expo Workshops, Turin, Italy, 29 June–3 July 2015;
pp. 1–5.
8. Jia, Y.U.; Chen, Y. Digital modulation recognition method based on BP neural network. Transducer Microsyst.
Technol. 2012, 5, 7.
9. Gharehbaghi, A.; Lindén, M. A Deep Machine Learning Method for Classifying Cyclic Time Series of
Biological Signals Using Time-Growing Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2017.
[CrossRef] [PubMed]
10. Shi, X.; Liu, Y. Modified Artificial Bee Colony Algorithm Optimizing BP Neural Network and Its Application
in the Digital Modulation Recognition. J. Jiangnan Univ. 2015, 4, 4.
11. Deng, J.L. Introduction to Grey system theory. J. Grey Syst. 1989, 1, 1–24.
12. Wang, X.; Wang, J.K.; Liu, Z.G.; Wang, B.; Hu, X. Spectrum Sensing for Cognitive Networks Based on
Dimensionality Reduction and Random Forest. Int. J. Signal Process. Image Process. 2014, 7, 443–452.
[CrossRef]
13. Wang, X.; Gao, Z.; Fang, Y.; Yuan, S.; Zhao, H.; Gong, W.; Qiu, M.; Liu, Q. A Signal Modulation Type
Recognition Method Based on Kernel PCA and Random Forest in Cognitive Network. In Proceedings of the
International Conference on Intelligent Computing, Taiyuan, China, 3–6 August 2014; pp. 522–528.
14. Zhang, Z.; Li, Y.; Zhu, X.; Lin, Y. A Method for Modulation Recognition Based on Entropy Features and
Random Forest. In Proceedings of the IEEE International Conference on Software Quality, Reliability and
Security Companion, Prague, Czech Republic, 25–29 July 2017.
Appl. Sci. 2018, 8, 1327 15 of 15
15. Zhang, Y.D.; Chen, X.Q.; Zhan, T.M.; Jiao, Z.Q.; Sun, Y.; Chen, Z.M.; Yao, Y.; Fang, L.T.; Lv, Y.D.;
Wang, S.H. Fractal Dimension Estimation for Developing Pathological Brain Detection System Based on
Minkowski-Bouligand Method. IEEE Access 2016, 4, 5937–5947. [CrossRef]
16. Sanchez, J.P.; Alegria, O.C.; Rodriguez, M.V.; Abeyro, J.A.; Almaraz, J.R.; Gonzalez, A.D. Detection of ULF
Geomagnetic Anomalies Associated to Seismic Activity Using EMD Method and Fractal Dimension Theory.
IEEE Latin Am. Trans. 2017, 15, 197–205. [CrossRef]
17. Zhao, N.; Yu, F.R.; Sun, H.; Li, M. Adaptive Power Allocation Schemes for Spectrum Sharing in
Interference-Alignment-Based Cognitive Radio Networks. IEEE Trans. Veh. Technol. 2016, 65, 3700–3714.
[CrossRef]
18. Lin, Y.; Zhu, X.; Zheng, Z.; Dou, Z.; Zhou, R. The individual identification method of wireless device based
on dimensionality reduction and machine learning. J. Supercomput. 2017, 74, 1–18. [CrossRef]
19. Shi, C.; Dou, Z.; Lin, Y.; Li, W. Dynamic threshold-setting for RF-powered cognitive radio networks in
non-Gaussian noise. Phys. Commun. 2018, 27, 99–105. [CrossRef]
20. Yang, Z.; Deng, J.; Nallanathan, A. Moving Target Recognition Based on Transfer Learning and
Three-Dimensional Over-Complete Dictionary. IEEE Sens. J. 2016, 16, 5671–5678. [CrossRef]
21. Prieto, M.D.; Espinosa, A.G.; Ruiz, J.R.; Urresty, J.C.; Ortega, J.A. Feature Extraction of Demagnetization
Faults in Permanent-Magnet Synchronous Motors Based on Box-Counting Fractal Dimension. IEEE Trans.
Ind. Electron. 2011, 58, 1594–1605. [CrossRef]
22. Zhou, J.T.; Xu, X.; Pan, S.J.; Tsang, I.W.; Qing, Z.; Goh, R.S.M. Transfer Hashing with Privileged Information.
arXiv, 2016, arXiv:1605.04034.
23. Vanthana, P.S.; Muthukumar, A. Iris authentication using Gray Level Co-occurrence Matrix and Hausdorff
Dimension. In Proceedings of the International Conference on Computer Communication and Informatics,
Coimbatore, India, 8–10 January 2015.
24. Gui, Y. Hausdorff Dimension Spectrum of Self-Affine Carpets Indexed by Nonlinear Fibre-Coding.
In Proceedings of the International Workshop on Chaos-Fractals Theories and Applications, Liaoning, China,
6–8 November 2009; pp. 382–386.
25. Guariglia, E. Entropy and Fractal Antennas. Entropy 2016, 18, 84. [CrossRef]
26. Sevcik, C. A Procedure to Estimate the Fractal Dimension of Waveforms. Complex. Int. 1998, 5, 1–19.
27. Petrosian, A. Kolmogorov Complexity of Finite Sequences and Recognition of Different Preictal EEG Patterns.
In Proceedings of the Computer-Based Medical Systems, Lubbock, TX, USA, 9–10 June 1995; pp. 212–217.
28. Berry, M.V.; Lewis, Z.V.; Nye, J.F. On the Weierstrass-Mandelbrot fractal function. Proc. R. Soc. Lond. 1980,
370, 459–484. [CrossRef]
29. Guariglia, E. Spectral Analysis of the Weierstrass-Mandelbrot Function. In Proceedings of the 2nd
International Multidisciplinary Conference on Computer and Energy Science, Split, Croatia, 12–14 July 2017.
30. Wang, H.; Li, J.; Guo, L.; Dou, Z.; Lin, Y.; Zhou, R. Fractal Complexity-based Feature Extraction Algorithm
of Communication Signals. Fractals 2017, 25, 1740008. [CrossRef]
31. Liu, S.; Pan, Z.; Cheng, X. A Novel Fast Fractal Image Compression Method based on Distance Clustering in
High Dimensional Sphere Surface. Fractals 2017, 25, 1740004. [CrossRef]
32. Lin, Y.; Wang, C.; Wang, J.; Dou, Z. A Novel Dynamic Spectrum Access Framework Based on Reinforcement
Learning for Cognitive Radio Sensor Networks. Sensors 2016, 16, 1675. [CrossRef] [PubMed]
33. Liu, S.; Pan, Z.; Fu, W.; Cheng, X. Fractal generation method based on asymptote family of generalized
Mandelbrot set and its application. J. Nonlinear Sci. Appl. 2017, 10, 1148–1161. [CrossRef]
34. Liu, S. Special issue on advanced fractal computing theorem and application. Fractals 2017, 25, 1740007.
35. Liu, S.; Zhang, Z.; Qi, L.; Ma, M. A Fractal Image Encoding Method based on Statistical Loss used in
Agricultural Image Compression. Multimed. Tools Appl. 2016, 75, 15525–15536. [CrossRef]
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).