
Feed-Forward Neural Network Optimized by Hybridization of PSO and ABC for Abnormal Brain Detection
Shuihua Wang,1,2,3 Yudong Zhang,1,3 Zhengchao Dong,4 Sidan Du,2 Genlin Ji,1 Jie Yan,5
Jiquan Yang,3 Qiong Wang,3 Chunmei Feng,3 Preetha Phillips6
1 School of Computer Science and Technology, Nanjing Normal University, Nanjing, Jiangsu 210023, China
2 School of Electronic Science and Engineering, Nanjing University, Nanjing, Jiangsu 210046, China
3 Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, Nanjing, Jiangsu 210042, China
4 Translational Imaging Division and MRI Unit, Columbia University and New York State Psychiatric Institute, New York, NY 10032
5 Department of Applied Physics, Stanford University, Stanford, CA 94305
6 School of Natural Sciences and Mathematics, Shepherd University, Shepherdstown, WV 25443

Received 20 February 2015; revised 19 March 2015; accepted 7 April 2015

ABSTRACT: Automated and accurate classification of MR brain images is of crucial importance for medical analysis and interpretation. We proposed a novel automatic classification system based on particle swarm optimization (PSO) and artificial bee colony (ABC), with the aim of distinguishing abnormal brains from normal brains in MRI scanning. The proposed method used the stationary wavelet transform (SWT) to extract features from MR brain images. SWT is translation-invariant and performed well even when the image suffered from slight translation. Next, principal component analysis (PCA) was harnessed to reduce the SWT coefficients. Based on three different hybridization methods of PSO and ABC, we proposed three new variants of feed-forward neural network (FNN), consisting of IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN. The results of 10 runs of K-fold cross validation showed the proposed HPA-FNN was superior not only to the other two proposed classifiers but also to existing state-of-the-art methods in terms of classification accuracy. In addition, the method achieved perfect classification on Dataset-66 and Dataset-160. For Dataset-255, the 10 repetitions achieved an average sensitivity of 99.37%, average specificity of 100.00%, average precision of 100.00%, and average accuracy of 99.45%. The offline learning cost 219.077 s for Dataset-255, and online prediction merely 0.016 s. Thus, the proposed SWT + PCA + HPA-FNN method excelled existing methods. It can be applied to practical use. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 153–164, 2015; Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/ima.22132

Key words: particle swarm optimization; artificial bee colony; hybridization; magnetic resonance imaging; feed-forward neural network; stationary wavelet transform; principal component analysis; pattern recognition; classification

Correspondence to: Yudong Zhang and Sidan Du; e-mail: zhangyudong@njnu.edu.cn
Grant sponsors: NSFC (610011024, 61273243, 51407095); Program of Natural Science Research of Jiangsu Higher Education Institutions (13KJB460011, 14KJB520021); Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing (BM2013006); Key Supporting Science and Technology Program (Industry) of Jiangsu Province (BE2012201, BE2014009-3, BE2013012-2); Special Funds for Scientific and Technological Achievement Transformation Project in Jiangsu Province (BA2013058); Nanjing Normal University Research Foundation for Talented Scholars (2013119XGQ0061, 2014119XGQ0080).

I. INTRODUCTION
Magnetic resonance imaging (MRI) is a low-risk, fast, noninvasive imaging technique that produces high-quality images of the anatomical structures of the human body, especially the brain, and provides rich information for clinical diagnosis and biomedical research (Goh et al., 2014). Soft tissue structures are clearer and more detailed with MRI than with other imaging modalities (Zhang et al., 2013). Numerous studies have been carried out, trying not only to improve magnetic resonance (MR) image quality (Dong et al., 2014), but also to seek novel methods for easier and quicker pre-clinical diagnosis from MR
images (Sun et al., 2014). This study focuses on the latter, that is, abnormal brain detection.
The problem arises that existing manual methods of analysis and interpretation are tedious, time-consuming, costly, and irreproducible, due to the huge amount of imaging data. This necessitates the development of automatic computer-aided diagnosis (CAD) tools. One of the most discriminant features of a normal brain is its symmetry, which is obvious in either the axial or the coronal direction. However, asymmetry along an axial MR brain slice suggests a pathological brain. This symmetry–asymmetry can be modelled by various image processing techniques, and can be used to classify normal and abnormal brain MR images (Maji et al., 2008).
Recently, various automatic CAD methods for abnormal brain detection were developed. Roughly, they can be divided into two categories (Aswathy et al., 2014): supervised classification and unsupervised classification. While both of these methods achieved satisfactory results, supervised classification performs better than unsupervised classification in terms of classification accuracy (successful classification rate). In this study, we would like to develop a more accurate and faster CAD method.
Traditionally, the discrete wavelet transform (DWT) is commonly used as an effective tool for feature extraction from 2D MR brain images because it allows for the analysis of images at various levels of resolution. However, DWT is translation-variant; that is, DWT coefficients may change remarkably when the brain MR image is only slightly shifted because of the dithering of the subject. Therefore, we proposed to replace DWT with the stationary wavelet transform (SWT).
Next, the feed-forward neural network (FNN) (Zhang and Wu, 2008) was chosen as the classifier because it has the advantage that it can classify nonlinearly separable patterns and approximate an arbitrary continuous function. However, finding the optimal parameters of an FNN is a difficult task because the search algorithms are easily trapped in local extrema. Recently, many algorithms have been available to train the FNN, such as the back-propagation (BP) algorithm, genetic algorithm (GA), elite genetic algorithm with migration (EGAM) (Kellegöz et al., 2010), simulated annealing (SA), particle swarm optimization (PSO) (Kiranyaz et al., 2009), and artificial bee colony (ABC). Unfortunately, the BP, GA, SA, PSO, and ABC algorithms all demand expensive computational costs and can still easily be trapped in the local best, hence would probably end up without finding the optimal weights of the FNN.
Considering that ABC has effective exploration ability while its exploitation ability is poor, and that PSO has good exploitation ability but may get stuck in local minima, we proposed to combine PSO with ABC. Shi et al. (2010) proposed the IABAP algorithm. El-Abd (2011) proposed the ABC-SPSO algorithm. Kiran and Gunduz (2013) proposed a recombination-based Hybridization of PSO and ABC (HPA). In the following text, we applied these three different hybridization methods to FNN, and proposed three new classification methods: IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN.
The structure of the rest of this paper is organized as follows. The next section discusses the latest advances in MR brain image classification. Section III describes every step of the proposed systems. Section IV describes three new optimization methods, and proposes three new classification methods based on them. Section V presents the three datasets and the setting of K-fold stratified cross validation. Experiments in Section VI compare our method with state-of-the-art methods. Sections VII and VIII are devoted to discussion and conclusion. For ease of reading, we explain the mathematical symbols in Appendix Table AI and the acronyms in Appendix Table AII.

II. BACKGROUND ON MR BRAIN IMAGE CLASSIFICATION
In the last decade, various methods were proposed for brain MR image classification. Chaplot et al. (2006) used the approximation coefficients obtained by discrete wavelet transform (DWT), and employed the self-organizing map (SOM) neural network and support vector machine (SVM). Maitra and Chatterjee (2006) employed the Slantlet transform, which is an improved version of DWT. Their feature vector of each image is created by considering the magnitudes of Slantlet transform outputs corresponding to six spatial positions chosen according to a specific logic. Then, they used the common back-propagation neural network (BPNN). El-Dahshan et al. (2010) extracted the approximation and detail coefficients of three-level DWT, reduced the coefficients by principal component analysis (PCA), and used feed-forward back-propagation artificial neural network (FP-ANN) and K-nearest neighbor (KNN) classifiers. Zhang et al. (2011b) proposed using DWT for feature extraction, PCA for feature reduction, and FNN with scaled chaotic artificial bee colony (SCABC) as the classifier. Based on it, Zhang et al. (2011a) suggested replacing SCABC with the scaled conjugate gradient (SCG) method. Ramasamy and Anandhakumar (2011) used a fast-Fourier-transform based expectation-maximization Gaussian mixture model for brain tissue classification of MR images. Zhang and Wu (2012) proposed to use kernel SVM (kSVM), and they suggested three new kernels: homogeneous polynomial, inhomogeneous polynomial, and Gaussian radial basis. Saritha et al. (2013) proposed a novel feature of wavelet-entropy (WE), and employed spider-web plots (SWP) to further reduce features. Afterwards, they used the probabilistic neural network (PNN). Das et al. (2013) proposed to use Ripplet transform (RT) + PCA + least square SVM (LS-SVM), and the 5 × 5 CV showed high classification accuracies. Kalbkhani et al. (2013) modelled the detail coefficients of two-level DWT by the generalized autoregressive conditional heteroscedasticity (GARCH) statistical model, and the parameters of the GARCH model are considered as the primary feature vector. Their classifiers were chosen as KNN and SVM models. Zhang et al. (2014b) presented a classification method for Alzheimer's disease based on structural MR images by support vector machine decision tree (SVMDT).
All those methods achieved satisfying results; nevertheless, most methods suffered from three points. (i) They commonly used DWT, which is translation-variant, namely, the wavelet coefficients behave unpredictably under translation of the input signal. (ii) The classifier performed well on training images, but poorly on new query images. (iii) Their algorithms were tested on a small dataset (66 images or even fewer), which may undermine the reliability (Button et al., 2013).
To address those problems, we suggested three potential improvements: we employed the stationary wavelet transform (SWT), which is translation-invariant, to replace DWT; proposed three FNN variants based on three different hybridization methods of PSO and ABC; and tested the proposed methods on three benchmark datasets, which consisted of 66, 160, and 255 images, respectively.

III. METHODOLOGY
A. Discrete Wavelet Transform. The discrete wavelet transform (DWT) is a powerful implementation of the wavelet transform (WT) using dyadic scales and positions. The fundamentals of DWT are introduced as follows. Suppose x(t) is a square-integrable function; then the continuous WT of x(t) relative to a given wavelet ψ(t) is defined as



$$C_\psi(f_s, f_t) = \int_{-\infty}^{\infty} x(t)\, \psi(t \mid f_s, f_t)\, dt \qquad (1)$$

where

$$\psi(t \mid f_s, f_t) = \frac{1}{\sqrt{f_s}}\, \psi\!\left(\frac{t - f_t}{f_s}\right) \qquad (2)$$

Here, the wavelet ψ(t|fs, ft) is calculated from the mother wavelet ψ(t) by translation and dilation: fs is the scale factor, ft the translation factor (both real positive numbers), and C the coefficients of the WT. There are several different kinds of wavelets that have gained popularity throughout the development of wavelet analysis.
Equation (1) can be discretized by restraining fs and ft to a discrete lattice ($f_s = 2^{f_t}$ and $f_s > 0$) to give the DWT, which can be expressed as follows:

$$L(n \mid f_s, f_t) = DS\!\left[\sum_n x(n)\, l_{f_s}\!\left(n - 2^{f_s} f_t\right)\right], \qquad H(n \mid f_s, f_t) = DS\!\left[\sum_n x(n)\, h_{f_s}\!\left(n - 2^{f_s} f_t\right)\right] \qquad (3)$$

Here the coefficients L and H refer to the approximation components and the detail components, respectively. The functions l(n) and h(n) denote the low-pass filter and high-pass filter, respectively. The DS operator means downsampling.
The above decomposition process can be iterated, with successive approximations being decomposed, so that one signal is broken down into various levels of resolution. The whole process is called a wavelet decomposition tree.
In applying this technique to MR images, the DWT is applied separately to each dimension. As a result, there are four subband images (LL, LH, HH, and HL) at each scale. The subband LL is then used for the next decomposition. As the level of the decomposition increases, we obtain more compact yet coarser approximation components. Thus, wavelets provide a simple hierarchical framework for interpreting the image information.

B. ε-decimated DWT and Stationary WT. The DWT is translation-variant, meaning that the DWT of a translated version of a signal x is not the translated version of the DWT of x. Suppose I denotes a given MR image, and T the translation operator; then

$$\mathrm{DWT}(T(I)) \neq T(\mathrm{DWT}(I)) \qquad (4)$$

Formula (4) suggests that the features obtained by DWT may change remarkably when the brain MR image is only slightly shifted because of the dithering of the subject. In the worst cases, DWT-based classification may even recognize two images from one subject as two from different subjects, when the centers of the images are located at slightly different positions.
How can the translation invariance property lost by classical DWT be preserved? The ε-decimated DWT was proposed to solve this problem. The DS in classical DWT [Eq. (3)] retains even-indexed elements, which is where the time/spatial-variance problem lies. To address the problem, DS in the ε-decimated DWT chooses predefined indexed elements (odd or even) instead of purely even-indexed elements.
The choice concerns every step of the decomposition process. If we perform all the different possible decompositions of the original signal for a given maximum level J, then we will have 2^J different decompositions (Cherif et al., 2010). Suppose εj = 1 or 0 denotes the choice of odd or even indexed elements at step j. Then, every decomposition is labeled by a sequence of 0s and 1s, namely, ε = ε1ε2…εJ. This transform is called the ε-decimated DWT. A graphical example of ε = 10010 is shown in Figure 1.

Figure 1. A graphical illustration of the ε-decimated DWT (ε = 10110, controlling the DS operator to retain odd or even indices).

The SWT calculates all the ε-decimated DWTs for a given signal at one time. More precisely, for level 1, the SWT can be obtained by convolving the signal with the appropriate filters as in the DWT but without downsampling. Then the approximation and detail coefficients at level 1 have the same length as the signal.
The general step j convolves the approximation coefficients at level j−1 with the appropriate filters, but without downsampling, to produce the approximation and detail coefficients at level j. The schematic diagram is shown in Figure 2a. The algorithm of 1D-SWT can be easily extended to the 2D case. Figure 2b shows the schematic diagram of 2D-SWT.

Figure 2. Schematic diagram of SWT.
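The "convolve but never downsample" recipe above can be made concrete with a short sketch. The following minimal NumPy implementation of a 1D stationary Haar transform illustrates the scheme in Figure 2; it is not the authors' code, and the Haar filter values and the circular boundary handling are assumptions.

```python
import numpy as np

def _upsample(f, j):
    """Insert 2**j - 1 zeros between filter taps (the 'a trous' holes)."""
    up = np.zeros((len(f) - 1) * 2 ** j + 1)
    up[:: 2 ** j] = f
    return up

def _circular_conv(x, f):
    """Circular convolution that keeps the signal length (no downsampling)."""
    y = np.zeros_like(x)
    for k, fk in enumerate(f):
        y += fk * np.roll(x, k)
    return y

def swt_haar(x, levels):
    """1D stationary Haar WT: approximation cA and a list of details cDs,
    every coefficient array having the same length as x."""
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass filter l(n)
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass filter h(n)
    cA, cDs = np.asarray(x, dtype=float), []
    for j in range(levels):
        # Level j uses filters upsampled by 2**j instead of downsampling x
        cDs.append(_circular_conv(cA, _upsample(hi, j)))
        cA = _circular_conv(cA, _upsample(lo, j))
    return cA, cDs

x = np.sin(np.linspace(0, 4 * np.pi, 64))
cA, cDs = swt_haar(x, levels=3)
assert all(len(c) == len(x) for c in [cA] + cDs)  # sizes never shrink
```

Circularly shifting x and re-running shifts every coefficient array by the same amount, which is exactly the translation property that the plain DWT lacks. For 2D images, the same filtering is applied along rows and then columns at each level.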



C. Feature Reduction. Excessive features increase computation time and storage memory. Furthermore, they sometimes make classification more complicated, which is called the curse of dimensionality. It is required to reduce the number of features. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (PC) (Zhang et al., 2014b,c,d). PCA is efficient at reducing the dimension of a data set while retaining most of the variation. It has three effects: it orthogonalizes the components of the input vectors so that they are not correlated with each other, it orders the resulting orthogonal components so that those with the largest variation come first, and it eliminates those components contributing the least to the variation in the data set. The input vectors should be normalized to have zero mean and unit variance before performing PCA. Details about PCA can be found in the literature.

D. Feed-Forward Neural Network. Feed-forward neural networks (FNN) are widely used in pattern classification since they do not need any information about the probability distribution or the a priori probabilities of the different classes. The training vectors were fed into the FNN and trained in batch mode (Guo et al., 2014). The network configuration is supposed as N_I × N_H × N_O, that is, a two-layer network with N_I input neurons, N_H neurons in the hidden layer, and N_O outputs indicating whether the brain is normal or abnormal.
Assume ω1 and ω2 represent the connection weight matrices between the input layer and the hidden layer, and between the hidden layer and the output layer, respectively; then we can infer the training process described by the following equations to update these weight values, which can be divided into four steps (Manoochehri and Kolahan, 2014):

1. The outputs of all neurons in the hidden layer are calculated by

$$y_j = f_H\!\left(\sum_{i=1}^{N_I} \omega_1(i, j)\, x_i\right), \qquad j = 1, 2, \ldots, N_H \qquad (5)$$

Here xi denotes the ith input value, yj denotes the jth output of the hidden layer, and fH is referred to as the activation function of the hidden layer, usually a sigmoid function:

$$f_H(x) = \frac{1}{1 + \exp(-x)} \qquad (6)$$

2. The outputs of all neurons in the output layer are given as follows:

$$O_k = f_O\!\left(\sum_{j=1}^{N_H} \omega_2(j, k)\, y_j\right), \qquad k = 1, 2, \ldots, N_O \qquad (7)$$

Here fO denotes the activation function of the output layer, usually a linear function. All weights are assigned random values initially, and are traditionally modified by the delta rule according to the learning samples.

3. The error is expressed as the MSE of the difference between output and target values (Zhang et al., 2014b,c,d):

$$E_l = \mathrm{mse}\!\left(\sum_{k=1}^{N_O} (O_k - T_k)\right), \qquad l = 1, 2, \ldots, N_S \qquad (8)$$

where Tk represents the kth value of the authentic labels, which are already known to users, and NS represents the number of samples (Poursamad, 2009).

4. Suppose there are NS samples; then the fitness value is written as

$$F(\omega) = \sum_{l=1}^{N_S} E_l \qquad (9)$$

where ω represents the vectorization of (ω1, ω2). Our goal is to minimize this fitness function F(ω), viz., force the output values of each sample to approximate the corresponding target values.

IV. OPTIMIZATION METHODS
A. Particle Swarm Optimization. Particle swarm optimization (PSO) performs searching via a swarm of particles that is updated from iteration to iteration. To seek the optimal solution, each particle moves in the direction of its previously best (pbest) position and the global best (gbest) position in the swarm (Zhang et al., 2014a).

$$\mathrm{pbest}(i, t) = \arg\min_{k = 1, \ldots, t} \left[f(P_i(k))\right], \qquad i \in \{1, 2, \ldots, N_P\} \qquad (10)$$

$$\mathrm{gbest}(t) = \arg\min_{\substack{i = 1, \ldots, N_P \\ k = 1, \ldots, t}} \left[f(P_i(k))\right] \qquad (11)$$

where i denotes the particle index, NP the total number of particles, t the current iteration number, f the fitness function, and P the position. The velocity V and position P of the particles are updated by the following equations:

$$V_i(t+1) = \omega V_i(t) + c_1 r_1 \left(\mathrm{pbest}(i, t) - P_i(t)\right) + c_2 r_2 \left(\mathrm{gbest}(t) - P_i(t)\right) \qquad (12)$$

$$P_i(t+1) = P_i(t) + V_i(t+1) \qquad (13)$$

where V denotes the velocity, ω is the inertia weight used to balance global exploration and local exploitation, r1 and r2 are uniformly distributed random variables within the range [0, 1], and c1 and c2 are positive constant parameters called "acceleration coefficients."
It is common to set an upper bound for the velocity parameter. "Velocity clamping" (Shahzad et al., 2014) was used as a way to keep particles from flying out of the search space. Another method is the "constriction coefficient" strategy, proposed by Clerc and Kennedy (2002) as an outcome of a theoretical analysis of swarm dynamics, in which the velocities are constricted as well.
The first part of formula (12), known as "inertia," represents the previous velocity, which provides the necessary momentum for particles to roam across the search space. The second part, known as the "cognitive" component, represents the individual thinking of each particle. It encourages the particles to move toward their own best positions found so far. The third part, the "cooperation" component, represents the collaborative effect of the particles in finding the global optimal solution (Zhang et al., 2014b,c,d).
Let f: R^N → R be the cost function to be minimized. The function takes a candidate solution in the form of a vector of real numbers, and produces a real number as output that indicates the cost function value. The gradient of f is either unknown or hard to calculate. The goal is to find the global minimum x*. The pseudocodes are listed in Table I.

Table I. Simple description of PSO.
Algorithm PSO
Step 1: Initialize the particles' positions, velocities, pbest, and gbest.
Step 2: Repeat until a termination criterion is met:
  a. Pick random numbers r1, r2 ~ U(0, 1).
  b. Update each particle's velocity and position.
  c. Update pbest and gbest.
Step 3: Output gbest(t), which holds the best found solution.
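As a concrete reading of Eqs. (10)–(13) and Table I, a minimal sketch follows. The sphere function stands in for the FNN fitness, and ω = 0.75, c1 = c2 = 2 mirror the settings later reported in Section VI.C; the swarm size, iteration count, and clamping bound here are illustrative assumptions.

```python
import numpy as np

def pso(f, dim, n_particles=50, iters=200, w=0.75, c1=2.0, c2=2.0,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO for minimizing f: R^dim -> R (Eqs. (12)-(13))."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, (n_particles, dim))      # positions
    V = np.zeros((n_particles, dim))                 # velocities
    pbest = P.copy()
    pbest_val = np.apply_along_axis(f, 1, P)
    g = pbest[np.argmin(pbest_val)].copy()           # gbest, Eq. (11)
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))          # r1, r2 ~ U(0, 1)
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (g - P)  # Eq. (12)
        np.clip(V, -(hi - lo), hi - lo, out=V)       # velocity clamping
        P = P + V                                    # Eq. (13)
        vals = np.apply_along_axis(f, 1, P)
        better = vals < pbest_val                    # update pbest, Eq. (10)
        pbest[better], pbest_val[better] = P[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()       # update gbest
    return g, pbest_val.min()

sphere = lambda x: float(np.sum(x ** 2))             # stand-in for F(omega)
best, best_val = pso(sphere, dim=10)
```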

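In this paper, the "position" that the swarm algorithms optimize is the vectorized FNN weight ω of Eq. (9). A sketch of that objective for the two-layer configuration of Section III.D follows; the hidden-layer size and the toy data are assumptions (the networks take the 7 reduced features of Section VI.B as input).

```python
import numpy as np

NI, NH, NO = 7, 10, 1            # input/hidden/output sizes; NH is assumed

def fnn_fitness(omega, X, T):
    """F(omega) of Eq. (9): total MSE of the 2-layer FNN over all samples.
    `omega` vectorizes (omega1, omega2) as in the paper."""
    w1 = omega[: NI * NH].reshape(NI, NH)            # omega1: input -> hidden
    w2 = omega[NI * NH :].reshape(NH, NO)            # omega2: hidden -> output
    Y = 1.0 / (1.0 + np.exp(-(X @ w1)))              # Eqs. (5)-(6), sigmoid f_H
    O = Y @ w2                                       # Eq. (7), linear f_O
    return float(np.mean((O - T) ** 2, axis=1).sum())  # Eqs. (8)-(9)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, NI))                    # 20 toy feature vectors
T = rng.integers(0, 2, (20, NO)).astype(float)       # toy normal/abnormal labels
omega = rng.standard_normal(NI * NH + NH * NO)
print(fnn_fitness(omega, X, T))
# Any of the optimizers sketched here could then minimize this objective,
# e.g. best, _ = pso(lambda w: fnn_fitness(w, X, T), dim=NI*NH + NH*NO)
```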


B. Artificial Bee Colony. In nature, each bee performs only one single task, whereas, through a variety of information communication channels between bees, such as the waggle dance and special odors, the entire bee colony can easily find food resources that produce relatively high amounts of nectar, hence realizing its self-organizing behavior (Karaboga and Basturk, 2008).
Artificial bee colony (ABC) is introduced by mimicking the behavior of natural bees to construct relatively good solutions to realistic optimization problems. The colony of artificial bees contains three groups of bees: employed bees, onlookers, and scouts. The first half of the colony consists of the employed artificial bees and the second half includes the onlookers. For every food source, there is only one employed bee, viz., the number of employed bees is equal to the number of food sources. The employed bee of an abandoned food source becomes a scout. The search carried out by the artificial bees can be summarized as the following four phases in Table II.

Table II. Four phases in the ABC algorithm.
Algorithm ABC
I. Employed bees determine a food source within the neighborhood of the food source in their memory.
II. Employed bees share their information with onlookers within the hive, and then the onlookers select one of the food sources.
III. Onlookers select a food source within the neighborhood of the food sources chosen by themselves.
IV. An employed bee whose source has been abandoned becomes a scout and starts to search for a new food source randomly.

The detailed pseudocodes of the ABC algorithm are given below, and the general flowchart of the ABC algorithm is illustrated in Figure 3.

Step 1: Initialize the population of solutions xij and evaluate the population.
Step 2: Repeat.
Step 3: Produce new solutions (food source positions) tij in the neighborhood of xij for the employed bees using the formula

$$t_{ij} = x_{ij} + \Phi_{ij}\left(x_{ij} - x_{k(i), j}\right) \qquad (14)$$

Here k(i) is a solution in the neighborhood of i, and Φ is a random number in the range [−1, 1]. Evaluate the new solutions.
Step 4: Apply the greedy selection process between xi and ti.
Step 5: Calculate the probability values Pi for the solutions xi by means of their fitness values using the equation

$$P_i = \frac{f_i}{\sum_{i=1}^{N} f_i} \qquad (15)$$

Here, N denotes the number of solutions and f denotes the fitness value.
Step 6: Normalize the Pi values into [0, 1].
Step 7: Produce the new solutions (new positions) ti for the onlookers from the solutions xi, selected depending on Pi, and evaluate them.
Step 8: Apply the greedy selection process for the onlookers between xi and ti.
Step 9: Determine the abandoned solution (source), if it exists, and replace it with a new randomly produced solution xi for the scout using the equation

$$x_{ij} = \min_j + u_{ij} \times (\max_j - \min_j) \qquad (16)$$

Here uij is a random number in [0, 1].
Step 10: Memorize the best food source position (solution) achieved so far.
Step 11: Return to Step 2 until the termination criterion is met.

Figure 3. Flowchart of the ABC algorithm.
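Steps 1–11 translate into a compact loop. The sketch below illustrates Eqs. (14)–(16) rather than reproducing the authors' implementation; the trial counter with a "limit" threshold that triggers the scout phase, and the 1/(1 + cost) conversion from a cost to the fitness used in Eq. (15), are standard ABC conventions assumed here.

```python
import numpy as np

def abc(f, dim, n_food=25, iters=200, limit=20, lo=-5.0, hi=5.0, seed=0):
    """Minimal artificial bee colony minimizing a nonnegative cost f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_food, dim))           # food sources / solutions
    cost = np.apply_along_axis(f, 1, X)
    trials = np.zeros(n_food, dtype=int)             # stagnation counters

    def neighbor_search(i):
        """Eq. (14): t_ij = x_ij + Phi_ij (x_ij - x_k(i),j), then greedy pick."""
        k = rng.choice([m for m in range(n_food) if m != i])
        j = rng.integers(dim)
        t = X[i].copy()
        t[j] += rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
        ft = f(t)
        if ft < cost[i]:                             # greedy selection
            X[i], cost[i], trials[i] = t, ft, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                      # employed bee phase
            neighbor_search(i)
        fit = 1.0 / (1.0 + cost)                     # cost -> fitness (assumed)
        p = fit / fit.sum()                          # Eq. (15)
        for i in rng.choice(n_food, n_food, p=p):    # onlooker bee phase
            neighbor_search(i)
        worn = np.argmax(trials)                     # scout bee phase, Eq. (16)
        if trials[worn] > limit:
            X[worn] = lo + rng.random(dim) * (hi - lo)
            cost[worn], trials[worn] = f(X[worn]), 0
    best = np.argmin(cost)
    return X[best], cost[best]

best, val = abc(lambda x: float(np.sum(x ** 2)), dim=10)
```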



C. Hybridization I—IABAP. Shi et al. (2010) proposed an integrated algorithm based on artificial bee colony and particle swarm optimization (IABAP), which begins with two subsystems of ABC and PSO and executes them in parallel. Information exchange occurs between PSO and ABC. The flow of information from the particle swarm to the bee colony is based on scout bees: when a scout bee exists in the ABC, a new solution is selected from the particle swarm for this scout bee. On the other hand, the information flow from the bee colony to the particle swarm happens in the way that the particle velocity is affected by the bee colony at a certain probability level when the particle velocity is updated. Table III lists its pseudocodes.

Table III. Pseudocodes of IABAP.
Algorithm IABAP
Step 1: Initialize the ABC and PSO subsystems.
Step 2: Execute the employed bees' search and the onlookers' selection and search processes over the bee colony.
Step 3: Execute the scouts' search process. The start points are determined by a given probability or obtained through the information exchange.
Step 4: Execute the particles' search process. If the small given probability is satisfied, information exchange occurs.
Step 5: Memorize the best solution as the final solution and stop if the best individual in either of the two subsystems satisfies the termination criterion.

D. Hybridization II—ABC-SPSO. El-Abd (2011) proposed the artificial bee colony–standard particle swarm optimization (ABC-SPSO) algorithm. They considered that the update equation of ABC only updates a single problem variable at a time, after which the new solution is re-evaluated. This component is added to PSO after the main loop. For every particle i in the swarm, the ABC update equation is applied to its personal best (pbest) solution. This is done after randomly selecting another particle k and a random problem variable j. Table IV shows the pseudocodes of ABC-SPSO.

Table IV. Pseudocodes of ABC-SPSO.
Algorithm ABC-SPSO
Step 1: Initialize and evaluate the swarm.
Step 2: Repeat.
Step 3: For each particle, update the velocity, position, and pbest.
Step 4: Update the gbest of the whole swarm.
Step 5: For each particle, choose a different random particle and a random problem variable, apply the ABC update rule to pbest, and update pbest and gbest.
Step 6: Return to Step 2 until the termination criterion is met.

E. Hybridization III—HPA. Kiran and Gunduz (2013) proposed a recombination-based Hybridization of PSO and ABC (HPA). The best solutions of the populations obtained at each iteration of PSO and ABC are recombined, and the solution (referred to as "TheBest") obtained from recombination is given to the PSO and to the onlooker bees of ABC as the global best (gbest) and a neighbor, respectively. Therefore, TheBest provides HPA with better global search and exploitation abilities. Table V lists the pseudocodes of HPA.

Table V. Pseudocodes of HPA.
Algorithm HPA
Step 1: Initialize the HPA, and determine the gbest of PSO and the best of ABC.
Step 2: Repeat.
Step 3: Apply the recombination procedure to the gbest of PSO and the best solutions of ABC to generate TheBest.
Step 4: Update the velocities and positions of the particles, and determine the personal bests of the particles and the gbest of the population.
Step 5: Carry out the employed bee phase, onlooker bee phase, and scout bee phase of ABC, and determine the best of the population.
Step 6: Return to Step 2 until the termination condition is met; report TheBest.

F. Proposed Methods. Based on FNN and the three different hybridization optimization methods, we proposed three novel FNN variants: IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN. From another point of view, the proposed systems consist of three different stages (Figure 4 and Table VI):

- Feature extraction: SWT
- Feature reduction: PCA
- Classification: FNN trained by IABAP, ABC-SPSO, and HPA

Figure 4. Flowchart of the proposed system.

The system is coherent with existing classification systems. The implementation of the proposed system is twofold: offline learning with the aim of training the classifier, and online prediction with the aim of predicting normal/abnormal labels for subjects.

Table VI. Pseudocodes of the proposed system.
Phase I: Offline Learning (Users are Scientists)
Step 1: The ground-truth images were decomposed by 2D-SWT.
Step 2: PCA was carried out on the SWT coefficients, and the PC coefficient matrix was generated.
Step 3: The set of reduced features, along with the corresponding class labels, was used to train the FNN. K-fold stratified CV was employed to get the out-of-sample evaluation.
Step 4: Report the classification performance.
Phase II: Online Prediction (Users are Doctors and Radiologists)
Step 1: Users present to the system the query image to be classified.
Step 2: 2D-SWT is performed on the query image, and the PC score is obtained by multiplying the SWT coefficients with the PC coefficient matrix.
Step 3: The PC score of the query image is input to the previously trained FNN.
Step 4: The classifier labels the input query image as normal or abnormal.

V. MATERIALS AND ASSESSMENT
A. Three Benchmark Datasets. Three different benchmark MR image datasets, that is, Dataset-66, Dataset-160, and Dataset-255, were used for testing in this study. All datasets consist of T2-weighted MR brain images in the axial plane with 256 × 256 in-plane resolution, which were downloaded from the website of Harvard Medical School (http://med.harvard.edu/AANLIB/). Dataset-66 and Dataset-160 have already been widely used in brain MR image classification. They consist of abnormal images from seven types of diseases along with normal images. The abnormal brain MR images of the two datasets cover the following diseases: glioma, meningioma, Alzheimer's disease, Alzheimer's disease plus visual agnosia, Pick's disease, sarcoma, and Huntington's disease. Das et al. (2013) proposed the third dataset, "Dataset-255," which contains 11 types of diseases; seven types are the same as in Dataset-66 and Dataset-160 mentioned before, and four new types of diseases (chronic subdural hematoma, cerebral toxoplasmosis, herpes encephalitis, and multiple sclerosis) were included. Figure 5 shows samples of brain MR images.
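Looking back at Section IV.E, the heart of HPA is recombining the PSO gbest with the ABC best into TheBest and feeding it back to both subsystems. A toy sketch of one recombination step follows; the uniform-crossover mixing rule is an assumption, and the exact operator of Kiran and Gunduz (2013) may differ.

```python
import numpy as np

def recombine(gbest_pso, best_abc, f, rng):
    """One HPA-style recombination: mix the two bests dimension-wise and
    keep TheBest = the lowest-cost of {gbest_pso, best_abc, mixture}."""
    mask = rng.random(gbest_pso.shape) < 0.5         # uniform crossover (assumed)
    mix = np.where(mask, gbest_pso, best_abc)
    the_best = min([gbest_pso, best_abc, mix], key=f)
    return the_best.copy()

rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))                  # stand-in cost function
g_pso = rng.standard_normal(10)                      # stand-ins for the two
b_abc = rng.standard_normal(10)                      # subsystem bests
the_best = recombine(g_pso, b_abc, f, rng)
# TheBest would then replace gbest in Eq. (12) and serve as the onlooker-bee
# neighbor in Eq. (14), as Steps 3-5 of Table V describe.
```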



Figure 5. Sample of brain MR images.

B. CV Setting. Following common convention and for ease of stratified cross validation, 10 × 6-fold stratified cross validation (CV) was used for Dataset-66, and 10 × 5-fold stratified CV was used for the other two datasets. Table VII shows the statistical characteristics and CV settings of the three datasets.

C. Experiment Design. We designed four experiment tasks in this study.
I. We gave a visual illustration of features extracted from a normal brain and an abnormal brain by the proposed SWT and by traditional DWT methods. A three-level Haar wavelet was used.
II. We employed PCA to reduce the SWT coefficients with the criterion of preserving at least 80% of the energy.
III. We compared the proposed three classification methods with state-of-the-art classification methods.
IV. We analyzed the computation time for every step of offline learning and online prediction.

VI. EXPERIMENT RESULTS
The experiments were carried out on an IBM machine with a 3 GHz Core i3 processor and 8 GB RAM, running the Windows 7 operating system. The algorithm was developed in-house via Matlab 2014a (The MathWorks).

A. Feature Illustration. First, we carried out both DWT and SWT on a normal MR brain image. A three-level Haar wavelet was utilized. For DWT, the size of the coefficients at level j is 2^{−j} of the original size in the x and y directions, whereas the size of the SWT coefficients at any level is the same as the size of the original image. Figure 6a shows the normal brain MR image. Figure 6b shows the three-level DWT decomposition result, compared with the three-level 2D-SWT decomposition result shown in Figure 6c, in which 10 subbands are stored as H1, H2, H3, V1, V2, V3, D1, D2, D3, and A3.

B. Feature Reduction. We performed PCA on the combination of the three datasets. Each image was reshaped as a row vector, and all images were aligned to form a two-dimensional matrix. The results in Figure 7 show that only seven PCs can preserve at least 80% of the total energy (the red line represents the energy threshold). We did not set the energy threshold at 95% or higher, as is common, since that would yield too many features, which would levy a heavy computation burden on the following classifiers.

Figure 7. Feature reduction result.
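The 80% energy criterion above amounts to keeping the fewest principal components whose cumulative variance reaches the threshold. A minimal sketch follows, assuming standardized inputs as Section III.C requires; the data here are random stand-ins for the SWT feature vectors.

```python
import numpy as np

def pca_by_energy(X, energy=0.80):
    """Return the scores of the fewest principal components whose
    cumulative variance reaches `energy` (0.80 as in this paper)."""
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)        # zero mean, unit variance
    U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)                    # variance per component
    n_pc = int(np.searchsorted(np.cumsum(var), energy) + 1)
    coeff = Vt[:n_pc].T                              # PC coefficient matrix
    return Xn @ coeff, coeff                         # scores + reusable matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((255, 1024))                 # stand-in for SWT features
scores, coeff = pca_by_energy(X)
# A query image is projected with the stored `coeff`, mirroring how the
# online-prediction phase in Table VI multiplies SWT coefficients by the
# PC coefficient matrix instead of re-running PCA.
```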



Table VII. Statistical characteristics and CV settings of the three datasets.

              Total               Training            Validation
Dataset       Normal   Abnormal   Normal   Abnormal   Normal   Abnormal   K-Fold   Run
Dataset-66    18       48         15       40         3        8          6        10
Dataset-160   20       140        16       112        4        28         5        10
Dataset-255   35       220        28       177        7        43         5        10
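The splits in Table VII can be generated by stratified sampling, which preserves the normal/abnormal ratio in every fold. A minimal sketch of the 10 × 5-fold protocol for Dataset-255 follows, assuming scikit-learn is available; the dummy majority-class predictor merely stands in for the trained FNN.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Stand-in labels matching Dataset-255: 35 normal (0) and 220 abnormal (1)
y = np.array([0] * 35 + [1] * 220)
X = np.random.default_rng(0).standard_normal((len(y), 7))  # 7 PC features

accs = []
for run in range(10):                                # the "10 runs" of V.B
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=run)
    for train_idx, val_idx in skf.split(X, y):
        # Train the FNN on (X[train_idx], y[train_idx]) here; a dummy
        # majority-class prediction stands in for the trained classifier.
        pred = np.ones(len(val_idx))
        accs.append(np.mean(pred == y[val_idx]))
print(f"average accuracy over 10 x 5-fold CV: {np.mean(accs):.4f}")
```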

C. Classification Comparison. We compared the proposed three methods (SWT + PCA + IABAP-FNN, SWT + PCA + ABC-SPSO-FNN, and SWT + PCA + HPA-FNN) with state-of-the-art methods (DWT + SOM (Chaplot et al., 2006), DWT + SVM (Chaplot et al., 2006), DWT + SVM + POLY (Chaplot et al., 2006), DWT + SVM + RBF (Chaplot et al., 2006), DWT + PCA + FP-ANN (El-Dahshan et al., 2010), DWT + PCA + KNN (El-Dahshan et al., 2010), DWT + PCA + SVM (Zhang and Wu, 2012), DWT + PCA + SVM + HPOL (Zhang and Wu, 2012), DWT + PCA + SVM + IPOL (Zhang and Wu, 2012), DWT + PCA + SVM + GRB (Zhang and Wu, 2012), WE + SWP + PNN (Saritha et al., 2013), and RT + PCA + LS-SVM (Das et al., 2013)), on the basis of averaging the results of 10 repetitions of either 5-fold or 6-fold stratified CV.
For the purpose of fair comparison, the population sizes of the algorithms were taken as 100, consisting of 50 particles, 25 employed bees, and 25 onlooker bees. c1 and c2 of PSO were assigned the value of 2, and ω was assigned the value of 0.75. The algorithms terminated when the maximum iteration number of 5000 was reached. The comparison result is shown in Table VIII, which also shows the feature vector dimension of each scheme.
It is clear from Table VIII that the proposed SWT + PCA + IABAP-FNN obtained accuracies of 100.00, 99.44, and 99.18% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed SWT + PCA + ABC-SPSO-FNN method obtained 100.00, 99.75, and 99.02% over the three datasets, respectively. Finally, the proposed SWT + PCA + HPA-FNN obtained 100.00, 100.00, and 99.45% over the three datasets, respectively.

Figure 6. The comparison between DWT and SWT.



In addition, the average sensitivity, average specificity, and average precision of the proposed SWT + PCA + HPA-FNN based on the 10 repetitions are shown in Table IX. The results show that the SWT + PCA + HPA-FNN performed perfectly on the two relatively small datasets. For the largest dataset, "Dataset-255," the SWT + PCA + HPA-FNN achieved an average sensitivity of 99.37%, average accuracy of 99.45%, average specificity of 100.00%, and average precision of 100.00%.

Table IX. Average measures of the proposed SWT + PCA + HPA-FNN method.
Dataset       Sensitivity   Specificity   Precision   Accuracy
Dataset-66    100.00        100.00        100.00      100.00
Dataset-160   100.00        100.00        100.00      100.00
Dataset-255   99.37         100.00        100.00      99.45

D. Time Analysis. Finally, computation time was investigated as another important measure of the classifier. Table X records the time consumed by each step of the proposed "SWT + PCA + HPA-FNN" method. For the offline learning phase, the three procedures, that is, SWT, PCA, and classifier training, cost 0.725, 0.519, and 217.833 s, respectively. For the online prediction phase, the three procedures, that is, SWT, PC score computation, and prediction, cost 0.011, 0.004, and 0.001 s.

VII. DISCUSSION
A. Discussion on Results. The results in Table VIII show that the HPA-FNN performs better than the other two proposed classification methods. In addition, we can find that the SWT + PCA + HPA-FNN excelled other state-of-the-art algorithms, including DWT + SOM (Chaplot et al., 2006), DWT + SVM (Chaplot et al., 2006), DWT + SVM + POLY (Chaplot et al., 2006), DWT + SVM + RBF (Chaplot et al., 2006), DWT + PCA + FP-ANN (El-Dahshan et al., 2010), DWT + PCA + KNN (El-Dahshan et al., 2010), DWT + PCA + SVM (Zhang and Wu, 2012), DWT + PCA + SVM + HPOL (Zhang and Wu, 2012), DWT + PCA + SVM + IPOL (Zhang and Wu, 2012), DWT + PCA + SVM + GRB (Zhang and Wu, 2012), WE + SWP + PNN (Saritha et al., 2013), and RT + PCA + LS-SVM (Das et al., 2013).
Why did HPA-FNN perform the best among the three proposed methods (Table VIII)? The reason is that it uses the exploration ability of ABC and the exploitation ability of PSO at the same time. In the HPA, the best solutions obtained by ABC and PSO are recombined, and the corresponding result (TheBest) is given to the PSO as gbest and to the ABC as a neighbor of the onlooker bees. Therefore, HPA keeps the best solution (TheBest) instead of the self-best solution (Kiran and Gunduz, 2013).
Let us revisit the IABAP, which also combined ABC and PSO. It uses two information exchanging strategies to share information between the two subsystems, the particle swarm and the bee colony (Shi et al., 2010). In the first strategy, a randomly selected food source position is used for updating the velocities of the particles in PSO, which causes slow convergence for unimodal functions. In the second strategy, the scout bees in ABC are taken from PSO, reducing the global search ability of ABC for multimodal functions. In the same way, the ABC-SPSO uses only the update rule of ABC in the PSO (El-Abd, 2011); hence, the scout bee mechanism of ABC is neglected in their algorithm. This is the reason why the global search ability of ABC-SPSO is impaired.
From Table X, we can calculate that the offline learning cost in total 0.725 + 0.519 + 217.833 = 219.077 s. To predict a query image, the computer cost only 0.011 + 0.004 + 0.001 = 0.016 s, which is rather fast and meets the real-time requirement.
The SWT and PCA operations in the offline learning phase cost much more time than those in the online prediction phase, because offline learning needed to operate over 255 images, whereas online prediction focuses on just one query image.

Table VIII. Accuracy comparison; some data extracted from (Das et al., 2013).

Existing approaches                                  Feature No.   Dataset-66   Dataset-160   Dataset-255
DWT + SOM (Chaplot et al., 2006)                     4761          94.00        93.17         91.65
DWT + SVM (Chaplot et al., 2006)                     4761          96.15        95.38         94.05
DWT + SVM + POLY (Chaplot et al., 2006)              4761          98.00        97.15         96.37
DWT + SVM + RBF (Chaplot et al., 2006)               4761          98.00        97.33         96.18
DWT + PCA + FP-ANN (El-Dahshan et al., 2010)         7             97.00        96.98         95.29
DWT + PCA + KNN (El-Dahshan et al., 2010)            7             98.00        97.54         96.79
DWT + PCA + SCABC-FNN (Zhang et al., 2011b)          19            100.00       99.27         98.82
DWT + PCA + SVM (Zhang and Wu, 2012)                 19            96.01        95.00         94.29
DWT + PCA + SVM + HPOL (Zhang and Wu, 2012)          19            98.34        96.88         95.61
DWT + PCA + SVM + IPOL (Zhang and Wu, 2012)          19            100.00       98.12         97.73
DWT + PCA + SVM + GRB (Zhang and Wu, 2012)           19            100.00       99.38         98.82
WE + SWP + PNN (Saritha et al., 2013)                3             100.00       99.94         98.86
RT + PCA + LS-SVM (Das et al., 2013)                 9             100.00       100.00        99.39

Proposed approaches                                  Feature No.   Dataset-66   Dataset-160   Dataset-255
SWT + PCA + IABAP-FNN                                7             100.00       99.44         99.18
SWT + PCA + ABC-SPSO-FNN                             7             100.00       99.75         99.02
SWT + PCA + HPA-FNN                                  7             100.00       100.00        99.45



Table X. Computation time analysis of SWT + PCA + HPA-FNN based on Dataset-255.
Offline Learning     Time (s)
SWT                  0.725
PCA                  0.519
HPA-FNN training     217.833
Online Prediction    Time (s)
SWT                  0.011
PC scores            0.004
Prediction           0.001

In addition, the PCA in the offline learning phase has already generated the PC coefficient matrix; hence, the prediction directly obtains the PC score by multiplying the extracted features with the PC coefficient matrix. The HPA-FNN training cost 217.833 s, which is a rather considerable time. Fortunately, the prediction procedure does not need to re-train the FNN and can obtain the classification output by submitting the PC score to the trained FNN.

B. Discussion on Methodology. There are sufficient reasons why we chose this "SWT + PCA + HPA-FNN" method. The SWT overcomes the translation variance of the plain DWT by removing the downsampling and upsampling operations. Although SWT is inherently redundant, it is worth applying in this study considering its excellent classification performance.
The PCA was essential in this study. The number of SWT features from the three-level Haar wavelet reached as high as roughly 256 × 256 × 10 = 655,360, too large for any existing classifier to learn. The PCA reduced the features to merely 7, not only reducing the calculation complexity but also making classifier training feasible.
The FNN was chosen since it is able to classify nonlinearly separable patterns and approximate an arbitrary continuous function. To increase the performance of the traditional FNN, we proposed three new FNN variants: IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN. The HPA-FNN performed best and excelled state-of-the-art MR brain image classification methods, because HPA combines the advantages of the exploration ability of ABC and the exploitation ability of PSO.

C. Contribution, Limitation, and Future Research. The proposed method largely solved the problem raised in the introduction. The contributions of this study center on the following four aspects: (i) we used SWT, which is translation-invariant, and proved it was better than DWT; (ii) we proposed three new classification methods, IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN, and proved that HPA-FNN performed the best among them; (iii) we proved that the proposed SWT + PCA + HPA-FNN method achieved better classification than existing state-of-the-art methods; and (iv) we showed that the computation time of online prediction of the proposed system is rapid and reaches the requirement of realistic use.
Three limitations were revealed after reconsidering the whole proposed method. First, FNN establishes machine-oriented rules, not human-oriented rules; that is, technicians cannot understand or interpret what the weights/biases of the classifier mean, in spite of the fact that machine-oriented rules yielded higher classification performance than human-oriented rules. Second, the three-level Haar wavelet was harnessed due to its extensive use in other literature; however, the problem remains of how to choose the optimal wavelet and the optimal decomposition level. Third, a larger dataset is expected to validate the effectiveness of the proposed method.
Future work should focus on the following four aspects: (i) we shall include other advanced imaging modalities, such as DTI and MRSI; (ii) the classification performance may increase by using other advanced wavelet transforms and decomposition levels; (iii) additional advanced feature extraction methods may be tested, such as the fractional wavelet (Yang et al., 2013), wavelet packet analysis, the dual-tree complex wavelet transform (Kadiri et al., 2014), the second-generation wavelet transform, fractional calculus (Yang et al., 2014a,b), etc.; and (iv) novel classification methods will be tested, such as deep learning, RBFNN, and others.

VIII. CONCLUSION
In this study, we employed SWT to replace the DWT method for feature extraction, employed PCA for feature reduction, and proposed three novel classification methods (IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN) to develop a CAD tool for identifying abnormal brains. The experiments over three datasets of different sizes demonstrated that the SWT + PCA + HPA-FNN outperformed the other two proposed methods and existing state-of-the-art abnormal brain CAD methods.

APPENDIX

Table AI. Definition of mathematical symbols.
Symbols (DWT)    Definition
t, n             Continuous and discrete time step
x, I             1D and 2D signal
ψ                Wavelet function
fs, ft           Scale and translation factors
C                Coefficients of wavelet decomposition
L, H             Approximation and detail components of C
l, h             Low-pass and high-pass filters
T                Translation operator
Symbols (SWT)    Definition
ε                Decimation of DWT
j                Decomposition level
Symbols (FNN)    Definition
NI, NH, NO       Number of input, hidden, and output neurons
fH, fO           Activation functions of the hidden and output layers
x, y, O          Input, hidden, and output values
ω1, ω2           Weight matrices between the input and hidden layers, and between the hidden and output layers
ω                Vectorization of (ω1, ω2)
T, O             Authentic and output labels
Symbols (PSO)    Definition
pbest, gbest     Previous and global best
NP               Number of particles
t                Iteration number
f                Fitness function
i                Particle index
P, V             Position and velocity of particles
ω                Inertia weight
c1, c2           Two acceleration coefficients
r1, r2           Two random numbers
Symbols (ABC)    Definition
t                Food source position
x                Solution
u, Φ             Two random numbers
k                Solution in the neighborhood of i
P                Probability
N                Number of solutions
the effectiveness of the proposed method.



Table AII. Abbreviation list.
Abbreviation     Definition
(A)(BP)(F)NN     (Artificial) (back-propagation) (feed-forward) neural network
(D)(S)WT         (Discrete) (stationary) wavelet transform
(k)SVM           (Kernel) support vector machine
ABC              Artificial bee colony
ABC-SPSO         Artificial bee colony—standard particle swarm optimization
CAD              Computer-aided diagnosis
DS               Downsampling
FNN              Feed-forward neural network
HPA              Hybridization of PSO and ABC
IABAP            Integrated algorithm based on artificial bee colony and particle swarm optimization
KNN              K-nearest neighbors
MR(I)            Magnetic resonance (imaging)
PC(A)            Principal component (analysis)
PNN              Probabilistic neural network
RT               Ripplet transform
SCABC            Scaled chaotic artificial bee colony
SCG              Scaled conjugate gradient
SOM              Self-organizing map

REFERENCES

S.U. Aswathy, G.G. Deva Dhas, and S.S. Kumar, A survey on detection of brain tumor from MRI brain images, Control, Instrumentation, Communication and Computational Technologies (ICCICCT), International Conference, 2014.

K.S. Button, J.P.A. Ioannidis, C. Mokrysz, B.A. Nosek, J. Flint, E.S.J. Robinson, and M.R. Munafo, Power failure: Why small sample size undermines the reliability of neuroscience, Nat Rev Neurosci 14 (2013), 365–376.

S. Chaplot, L.M. Patnaik, and N.R. Jagannathan, Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network, Biomed Signal Process Control 1 (2006), 86–92.

L.H. Cherif, S.M. Debbal, and F. Bereksi-Reguig, Choice of the wavelet analyzing in the phonocardiogram signal analysis using the discrete and the packet wavelet transform, Expert Syst Appl 37 (2010), 913–918.

M. Clerc and J. Kennedy, The particle swarm—Explosion, stability, and convergence in a multidimensional complex space, IEEE Trans Evol Comput 6 (2002), 58–73.

S. Das, M. Chowdhury, and M.K. Kundu, Brain MR image classification using multiscale geometric analysis of ripplet, Progr Electromagn Res 137 (2013), 1–17.

Z. Dong, Y. Zhang, F. Liu, Y. Duan, A. Kangarlu, and B.S. Peterson, Improving the spectral resolution and spectral fitting of 1H MRSI data from human calf muscle by the SPREAD technique, NMR Biomed 27 (2014), 1325–1332.

M. El-Abd, A hybrid ABC-SPSO algorithm for continuous function optimization, Swarm Intelligence (SIS), IEEE Symposium, 2011.

E.S.A. El-Dahshan, T. Hosny, and A.B.M. Salem, Hybrid intelligent techniques for MRI brain images classification, Digital Signal Process 20 (2010), 433–441.

S. Goh, Z. Dong, Y. Zhang, S. DiMauro, and B.S. Peterson, Mitochondrial dysfunction as a neurobiological subtype of autism spectrum disorder: Evidence from brain imaging, JAMA Psychiatr 71 (2014), 665–671.

D.L. Guo, Y.D. Zhang, Q. Xiang, and Z.H. Li, Improved radio frequency identification indoor localization method via radial basis function neural network, Math Prob Eng 2014 (2014), 9, doi:10.1155/2014/420482.

M. Kadiri, M. Djebbouri, and P. Carre, Magnitude-phase of the dual-tree quaternionic wavelet transform for multispectral satellite image denoising, Eurasip J Image Video Process 2014 (2014), 1–16, doi:10.1186/1687-5281-2014-41.

H. Kalbkhani, M.G. Shayesteh, and B. Zali-Vargahan, Robust algorithm for brain magnetic resonance image (MRI) classification based on GARCH variances series, Biomed Signal Process Control 8 (2013), 909–919.

D. Karaboga and B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl Soft Comput 8 (2008), 687–697.

T. Kellegöz, B. Toklu, and J. Wilson, Elite guided steady-state genetic algorithm for minimizing total tardiness in flowshops, Comput Ind Eng 58 (2010), 300–306.

M.S. Kiran and M. Gunduz, A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems, Appl Soft Comput 13 (2013), 2188–2203.

S. Kiranyaz, T. Ince, A. Yildirim, and M. Gabbouj, Evolutionary artificial neural networks by multi-dimensional particle swarm optimization, Neural Networks 22 (2009), 1448–1462.

M. Maitra and A. Chatterjee, A Slantlet transform based intelligent system for magnetic resonance brain image classification, Biomed Signal Process Control 1 (2006), 299–306.

P. Maji, M.K. Kundu, and B. Chanda, Second order fuzzy measure and weighted co-occurrence matrix for segmentation of brain MR images, Fund Inform 88 (2008), 161–176.

M. Manoochehri and F. Kolahan, Integration of artificial neural network and simulated annealing algorithm to optimize deep drawing process, Int J Adv Manuf Technol 73 (2014), 241–249.

A. Poursamad, Adaptive feedback linearization control of antilock braking systems using neural networks, Mechatronics 19 (2009), 767–773.

R. Ramasamy and P. Anandhakumar, Brain tissue classification of MR images using fast Fourier transform based expectation-maximization Gaussian mixture model, in Advances in Computing and Information Technology, vol. 198, D.C. Wyld, M. Wozniak, N. Chaki, N. Meghanathan, and D. Nagamalai, Eds., Berlin: Springer-Verlag, 2011, pp. 387–398.

M. Saritha, K. Paul Joseph, and A.T. Mathew, Classification of MRI brain images using combined wavelet entropy based spider web plots and probabilistic neural network, Pattern Recogn Lett 34 (2013), 2151–2156.

F. Shahzad, S. Masood, and N.K. Khan, Probabilistic opposition-based particle swarm optimization with velocity clamping, Knowledge Inform Syst 39 (2014), 703–737.

X. Shi, Y. Li, H. Li, R. Guan, L. Wang, and Y. Liang, An integrated algorithm based on artificial bee colony and particle swarm optimization, Natural Computation (ICNC), Sixth International Conference, 2010.

Y. Sun, Q.Y. Wen, Y.D. Zhang, and W.M. Li, Privacy-preserving self-helped medical diagnosis scheme based on secure two-party computation in wireless sensor networks, Comput Math Methods Med 2014 (2014), 9, doi:10.1155/2014/214841.

A.M. Yang, J. Li, H.M. Srivastava, G.N. Xie, and X.J. Yang, Local fractional Laplace variational iteration method for solving linear partial differential equations with local fractional derivative, Discrete Dynamics Nat Soc 2014 (2014a), 8, doi:10.1155/2014/365981.

X.J. Yang, D. Baleanu, H.M. Srivastava, and J.A.T. Machado, On local fractional continuous wavelet transform, Abstract Appl Anal 2013 (2013), 5, doi:10.1155/2013/725416.

X.J. Yang, J. Hristov, H.M. Srivastava, and B. Ahmad, Modelling fractal waves on shallow water surfaces via local fractional Korteweg-de Vries equation, Abstract Appl Anal 2014 (2014b), 10, doi:10.1155/2014/278672.

Y.-D. Zhang and L. Wu, Weights optimization of neural network via improved BCO approach, Progr Electromagn Res 83 (2008), 185–198.

Y. Zhang, S. Balochian, P. Agarwal, V. Bhatnagar, and O.J. Housheya, Artificial intelligence and its applications, Math Prob Eng 2014 (2014a), 10, doi:10.1155/2014/840491.

Y. Zhang, Z. Dong, L. Wu, and S. Wang, A hybrid method for MRI brain image classification, Expert Syst Appl 38 (2011a), 10049–10053.

Y. Zhang, S. Wang, and Z. Dong, Classification of Alzheimer disease based on structural magnetic resonance imaging by kernel support vector machine decision tree, Progr Electromagn Res 144 (2014b), 171–184.

Y. Zhang, S. Wang, G. Ji, and Z. Dong, An MR brain images classifier system via particle swarm optimization and kernel support vector machine, Sci World J 2013 (2013).

Y. Zhang, S. Wang, G. Ji, and P. Phillips, Fruit classification using computer vision and feedforward neural network, J Food Eng 143 (2014c), 167–177.

Y. Zhang, S. Wang, P. Phillips, and G. Ji, Binary PSO with mutation operator for feature selection using decision tree applied to spam detection, Knowledge-Based Syst 64 (2014d), 22–31.

Y. Zhang and L. Wu, An MR brain images classifier via principal component analysis and kernel support vector machine, Progr Electromagn Res 130 (2012), 369–388.

Y. Zhang, L. Wu, and S. Wang, Magnetic resonance brain image classification by an improved artificial bee colony algorithm, Progr Electromagn Res 116 (2011b), 65–79.

