New Bandwidth Selection Criterion For Kernel PCA Approach To Dimensionality Reduction and Classification Problems
Abstract
Background: DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques.
Results: Studies show that a well-tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well-tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and the Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies.
Conclusion: We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier that predicts the classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage through its relatively lower time complexity.
genes that best characterize each class. LS-SVM is a promising method for classification because of its solid mathematical foundations, which convey several salient properties that other methods hardly provide. A commonly used technique for feature selection, the t-test, assumes that the feature values from two different classes follow normal distributions. Several studies, especially in microarray analysis, have used the t-test and LS-SVM together to improve the prediction performance by selecting key features [6,7]. The Least Absolute Shrinkage and Selection Operator (Lasso) [8] is often used for gene selection and parameter estimation in high-dimensional microarray data [9]. The Lasso shrinks some of the coefficients to zero, and the extent of shrinkage is determined by the tuning parameter, often obtained from cross-validation.
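As an illustration of these two baselines, the following is a minimal sketch of per-gene t-test filtering and Lasso-based gene selection (using SciPy and scikit-learn; the variable names X, y and the p-value cutoff are illustrative and not taken from the paper):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV

def ttest_gene_selection(X, y, alpha=0.05):
    """Keep the genes whose two-sample t-test p-value falls below alpha.

    X : (N, d) expression matrix, y : (N,) binary labels (0/1).
    The paper reports FDR-corrected p-values; a correction step could be added here.
    """
    _, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.flatnonzero(pvals < alpha)          # indices of selected genes

def lasso_gene_selection(X, y, cv=5):
    """Genes with non-zero Lasso coefficients; the shrinkage parameter is
    chosen by cross-validation, as described in the text."""
    model = LassoCV(cv=cv).fit(X, y)
    return np.flatnonzero(model.coef_ != 0.0)
```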
Inductive learning systems have been successfully applied in a number of medical domains, e.g. in the localization of primary tumors, the prognosis of recurring breast cancer, the diagnosis of thyroid diseases, and rheumatology [10]. An induction algorithm is used to learn a classifier, which maps the space of feature values into the set of class values. This classifier is later used to classify new instances with unknown classifications (class labels). Researchers and practitioners realize that the effective use of these inductive learning systems requires data preprocessing before a learning algorithm can be applied [11]. Due to the instability of feature selection techniques, it might be difficult or even impossible to remove irrelevant and/or redundant features from a data set. Feature transformation techniques, such as KPCA, discover a new feature space with fewer dimensions through a functional mapping, while keeping as much information as possible from the data set.

KPCA, which is a generalization of PCA, is a nonlinear dimensionality reduction technique that has proven to be a powerful pre-processing step for classification algorithms. It has been studied intensively over the last several years in the field of machine learning and has claimed success in many applications [12]. An algorithm for classification using KPCA was developed by Liu et al. [13]. KPCA was proposed by Schölkopf and Smola [14] by mapping feature sets to a high-dimensional (possibly infinite-dimensional) feature space and applying Mercer's theorem. Suykens et al. [15,16] proposed a simple and straightforward primal-dual support vector machine formulation of the PCA problem.

To perform KPCA, the user first transforms the input data x from the original input space F0 into a higher-dimensional feature space F1 with a nonlinear transform $x \rightarrow \phi(x)$, where $\phi$ is a nonlinear function. Then a kernel matrix K is formed using the inner products of the new feature vectors. Finally, a PCA is performed on the centralized K, which is an estimate of the covariance matrix of the new feature vectors in F1. One of the commonly used kernel functions is the radial basis function (RBF) kernel, $K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2h^2}\right)$, with bandwidth h. Traditionally, the optimal parameters (bandwidth and number of principal components) of the RBF kernel function are selected in a trial-and-error fashion.
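For concreteness, a minimal sketch of the RBF kernel matrix described above (NumPy; the function name and arguments are ours, not from the paper):

```python
import numpy as np

def rbf_kernel_matrix(X, h):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 h^2)) for all pairs of rows of X.

    X : (N, d) data matrix (N samples, d features), h : bandwidth.
    """
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * h ** 2))
```

Both the bandwidth h and the number of retained principal components remain to be chosen; the data-driven criterion proposed below addresses exactly this tuning problem.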
Pochet et al. [17] proposed an optimization algorithm for KPCA with the RBF kernel followed by Fisher Discriminant Analysis (FDA) to find the parameters of KPCA. In this case, the parameter selection is coupled with the corresponding classifier. This means that the performance of the final procedure depends on the chosen classifier. Such a procedure could produce inaccurate results in the case of weak classifiers. In addition, it appears to be a time-consuming procedure for tuning the parameters of KPCA.

Most classification methods have an inherent problem with the high dimensionality of microarray data and hence require dimensionality reduction. The ultimate goal of our work is to design a powerful preprocessing step, decoupled from the classification method, for large-dimensional data sets. In this paper, we initially explain an SVM approach to PCA and an LS-SVM approach to KPCA. Next, following the idea of least squares cross-validation in kernel density estimation, we propose a new data-driven bandwidth selection criterion for KPCA. The tuned LS-SVM formulation of KPCA is applied to several data sets and serves as a dimensionality reduction technique for a final classification task. In addition, we compared the proposed strategy with an existing optimization algorithm for KPCA, as well as with other preprocessing steps. Finally, for the sake of comparison, we applied LS-SVM on the whole data sets, PCA + LS-SVM, t-test + LS-SVM, PAM and Lasso. Randomization on all data sets is carried out in order to get a more reliable idea of the expected performance.

Data sets
In our analysis, we collected 11 publicly available binary class data sets (diseased vs. normal). The data sets are: colon cancer data [18,19], breast cancer data [20], pancreatic cancer premalignant data [21,22], cervical cancer data [23], acute myeloid leukemia data [24], ovarian cancer data [21], head & neck squamous cell carcinoma data [25], early-early stage Duchenne muscular dystrophy (EDMD) data [26], HIV encephalitis data [27], high-grade glioma data [28], and breast cancer data [29]. In the breast cancer data [29] and the high-grade glioma data, all data samples have already been assigned to a training set or a test set. The breast cancer data in [29] contain missing values; those values have been imputed based on the nearest neighbor method.

An overview of the characteristics of all the data sets can be found in Table 1. In all cases, 2/3rd of the data samples of each class are assigned randomly to the training set and the remaining samples to the test set.
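A small sketch of the per-class random split described here (the 2/3 fraction is from the text; the function itself is only illustrative):

```python
import numpy as np

def stratified_split(y, train_fraction=2/3, rng=None):
    """Randomly assign train_fraction of the samples of each class to training.

    y : (N,) array of binary class labels. Returns boolean masks (train, test).
    """
    rng = np.random.default_rng() if rng is None else rng
    train = np.zeros(len(y), dtype=bool)
    for label in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == label))
        train[idx[:int(round(train_fraction * len(idx)))]] = True
    return train, ~train
```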
Table 1 Summary of the 11 binary disease data sets

Data set             #Samples (Class 1 / Class 2)   #Genes
1: Colon             22 / 40                        2000
2: Breast cancer I   34 / 99                        5970
3: Pancreatic        50 / 50                        15154
4: Cervical          8 / 24                         10692

These error variables are maximized for the given N data points, while keeping the norm of v small by the regularization term. The value $\gamma$ is a positive real constant. The Lagrangian becomes

$$L(v, e; \alpha) = \gamma \frac{1}{2} \sum_{k=1}^{N} e_k^2 - \frac{1}{2} v^T v - \sum_{k=1}^{N} \alpha_k \left( e_k - v^T x_k \right).$$
In the kernel PCA case, the same formulation is applied in the feature space F1, with a regularization term to keep the norm of v small. The following optimization problem is now formulated in the primal weight space:

$$\max_{v,e} \; J_P(v, e) = \gamma \frac{1}{2} \sum_{k=1}^{N} e_k^2 - \frac{1}{2} v^T v \qquad (3)$$

such that $e_k = v^T (\phi(x_k) - \mu_\phi)$, $k = 1, \ldots, N$.

The Lagrangian yields

$$L(v, e; \alpha) = \gamma \frac{1}{2} \sum_{k=1}^{N} e_k^2 - \frac{1}{2} v^T v - \sum_{k=1}^{N} \alpha_k \left( e_k - v^T (\phi(x_k) - \hat{\mu}_\phi) \right)$$

with conditions for optimality

$$\frac{\partial L}{\partial v} = 0 \;\rightarrow\; v = \sum_{k=1}^{N} \alpha_k (\phi(x_k) - \hat{\mu}_\phi),$$
$$\frac{\partial L}{\partial e_k} = 0 \;\rightarrow\; \alpha_k = \gamma e_k, \quad k = 1, \ldots, N,$$
$$\frac{\partial L}{\partial \alpha_k} = 0 \;\rightarrow\; e_k - v^T (\phi(x_k) - \hat{\mu}_\phi) = 0, \quad k = 1, \ldots, N.$$

By elimination of the variables e and v, one obtains

$$\frac{1}{\gamma} \alpha_k - \sum_{l=1}^{N} \alpha_l \left( \phi(x_l) - \hat{\mu}_\phi \right)^T \left( \phi(x_k) - \hat{\mu}_\phi \right) = 0, \quad k = 1, \ldots, N.$$

Defining $\lambda = \frac{1}{\gamma}$, one obtains the following dual problem

$$\Omega_c \, \alpha = \lambda \alpha,$$

where $\Omega_c$ denotes the centered kernel matrix with ij-th entry

$$\Omega_{c,ij} = K(x_i, x_j) - \frac{1}{N} \sum_{r=1}^{N} K(x_i, x_r) - \frac{1}{N} \sum_{r=1}^{N} K(x_j, x_r) + \frac{1}{N^2} \sum_{r=1}^{N} \sum_{s=1}^{N} K(x_r, x_s).$$
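In code, the centering and the eigenvalue problem $\Omega_c \alpha = \lambda \alpha$ amount to a few lines. The sketch below reuses the rbf_kernel_matrix helper shown earlier; the function names are ours, not from the paper:

```python
import numpy as np

def kpca_dual(K, n_components):
    """Solve Omega_c alpha = lambda alpha for the centered kernel matrix.

    K : (N, N) kernel matrix. Returns the n_components largest eigenvalues
    and the corresponding eigenvectors alpha^(n) as columns.
    """
    N = K.shape[0]
    One = np.full((N, N), 1.0 / N)
    K_c = K - One @ K - K @ One + One @ K @ One      # centered kernel Omega_c
    eigvals, eigvecs = np.linalg.eigh(K_c)           # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[top], eigvecs[:, top]

def score_variables(alphas, K_eval):
    """z_n(x) = sum_i alpha_i^(n) K(x_i, x) for every column x of K_eval.

    K_eval : (N, M) matrix with entries K(x_i, x_m); alphas : (N, k).
    Returns an (M, k) array of score variables.
    """
    return K_eval.T @ alphas
```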
have used trapezoidal rule. The final model with optimum
Data-driven bandwidth selection for KPCA bandwidth is constructed as follows:
Model selection is a prominent issue in all learning tasks, c,ĥ α = λα,
max
especially in KPCA. Since KPCA is an unsupervised (−j)
technique, formulating a data-driven bandwidth selection where ĥmax = maxh∈R+ N1 N j=1 |zn (x)|dx. Figure 1
0
criterion is not trivial. Until now, no such data-driven cri- shows the bandwidth selection for cervical and colon can-
terion was available to tune the bandwidth (h) and number cer data sets for fixed number of components. To also
of components (k) for KPCA. Typically these parameters retain the optimum number of components of KPCA, we
are selected by trial and error. Analogue to least squares modify Eq. 5 as follows:
cross validation [32,33] in kernel density estimation, we k N
propose a new data driven selection criterion for KPCA. 1 (−j)
J(h, k) = argmax |zn (x)|dx (6)
Let +
h∈R ,k∈N
N
0 0 n=1 j=1
zn (x) = i=1N
αi(n) K (xi , x) where k = 1, . . . , N. Figure 2 illustrate the proposed
model. Figure 3 shows the surface plot of Eq. 6 for various
x −x 2
where K (xi , xj ) = exp − i2h2j (RBF kernel with band- values of h and k.
width h) and set the target equal to 0 and denote by zn (x) Thus, the proposed data-driven model can obtain the
the score variable of sample x on nth eigenvector α (n) . optimal bandwidth for KPCA, while retaining minimum
Here, the score variables are expressed in terms of ker- number of eigenvectors which capture the majority of the
nel expressions in which every training point contributes. variance of the data. Figure 4 shows a slice of the surface
These expansions are typically dense (nonsparse). In plots. The values of the proposed criterion were re-scaled
Equation 3, the KPCA uses L2 lose function. Here we have to be maximum 1. The parameters that maximize Eq. 6
chosen the L1 loss function to induce sparsness in KPCA. are h = 70.71 and k = 5 for cervical cancer data and h =
By extending the formulation in Equation 3 to L1 loss 43.59 and k = 15 for colon cancer data.
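Putting the pieces together, the LOOCV criterion of Eq. 6 can be sketched as below, reusing rbf_kernel_matrix and kpca_dual from the earlier sketches. The paper does not spell out the integration grid for $\int |z_n^{(-j)}(x)|\,dx$ in this excerpt, so evaluating the left-out score functions at the available samples and applying the trapezoidal rule over those evaluations is our illustrative assumption:

```python
import numpy as np

def rbf_kernel_cross(X, Y, h):
    """RBF kernel evaluations K(x_i, y_m) between the rows of X and Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * (X @ Y.T)
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * h**2))

def criterion_J(X, h, k):
    """LOOCV approximation of J(h, k) in Eq. 6 for a single (h, k) pair."""
    N = X.shape[0]
    total = 0.0
    for j in range(N):                               # leave-one-out loop
        X_minus_j = np.delete(X, j, axis=0)
        K_minus_j = rbf_kernel_matrix(X_minus_j, h)
        _, alphas = kpca_dual(K_minus_j, n_components=k)
        K_eval = rbf_kernel_cross(X_minus_j, X, h)   # evaluate z_n^(-j) at all samples
        for n in range(k):
            z = K_eval.T @ alphas[:, n]
            total += np.trapz(np.abs(z))             # trapezoidal rule for the integral
    return total / N

# Grid search mirroring Figures 1 and 3 (illustrative grids):
# h_grid = np.linspace(1.0, 200.0, 50)
# k_grid = range(1, 21)
# h_best, k_best = max(((h, k) for h in h_grid for k in k_grid),
#                      key=lambda hk: criterion_J(X, *hk))
```

In practice one would cache kernel matrices across the grid; the sketch favors clarity over speed.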
Figure 1 Bandwidth selection of KPCA for a fixed number of components (criterion J(h) plotted against the bandwidth h). Retaining (a) 5 components for the cervical cancer data set and (b) 15 components for the colon cancer data set.
Figure 2 Data-driven bandwidth selection for KPCA: leave-one-out cross-validation (LOOCV) for KPCA.
Figure 3 Model selection for KPCA: optimal bandwidth and number of components (surface plots of J(h, k) against h and k). (a) Cervical cancer (b) Colon cancer.
We compared the proposed method with classical PCA and an existing tuning algorithm for RBF-KPCA developed by Pochet et al. [17]. Later, with the intention to comprehensively compare PCA+LS-SVM and KPCA+LS-SVM with other classification methods, we applied four widely used classifiers to the microarray data, being LS-SVM on the whole data sets, t-test + LS-SVM, PAM and Lasso. To fairly compare the kernel functions of the LS-SVM classifier, linear, RBF and polynomial kernel functions are used (in Table 2 referred to as linear/poly/RBF). The average test accuracies and execution times of all these methods, when applied to the 9 case studies, are shown in Table 2 and Table 3, respectively. Statistical significance test results (two-sided signed rank test) are given in Table 4, which compares the performance of KPCA with the other classifiers. For all these methods, training on 2/3rd of the samples and testing on 1/3rd of the samples was repeated 30 times.

Comparison between the proposed criterion and PCA
For each data set, the proposed methodology is applied. This methodology consists of two steps. First, Eq. 6 is maximized in order to obtain an optimal bandwidth h and a corresponding number of components k. Second, the reduced data set is used to perform a classification task with LS-SVM. We retained 5 and 15 components for the cervical and colon cancer data sets, respectively. For PCA, the optimal number of components was selected by slightly modifying Equation 6, i.e., performing the maximization only over the number of components k, as follows:

$$J(k) = \underset{k \in \mathbb{N}_0}{\operatorname{argmax}} \; \frac{1}{N} \sum_{n=1}^{k} \sum_{j=1}^{N} \int \left| z_n^{pca(-j)}(x) \right| \, dx, \qquad (8)$$

where $z_n^{pca}(x)$ is the score corresponding to the variable x in the PCA problem (see Equation 1).

Figure 5 shows the plots of the optimal component selection for PCA. Thus we retained 13 components and 15 components for the cervical and colon cancer data, respectively, for PCA. Similarly, we obtained the number of components of PCA, and the number of components with the corresponding bandwidth for KPCA, for the remaining data sets.

The score variables (projections of the samples onto the directions of the selected principal components) are used to develop an LS-SVM classification model. The averaged test AUC values over the 30 random repetitions were reported.

The main goal of PCA is the reduction of dimensionality, that is, focusing on a few principal components (PCs) rather than many variables. Several criteria have been proposed for determining how many PCs should be investigated and how many should be ignored. One common criterion is to include all those PCs up to a predetermined total percentage of variance explained, such as 95%. Figure 6 depicts the prediction performances on the colon cancer data, with PCA+LS-SVM (RBF), at different fractions of explained total variance. It shows that the results vary with the selected components. Here the number of retained components depends on the chosen fraction of explained total variance. The proposed approach offers a data-driven selection criterion for the PCA problem, instead of a traditional trial-and-error PC selection.
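For comparison, the conventional variance-explained rule described in the preceding paragraph takes only a few lines (a minimal sketch; the function name and the 95% default are illustrative):

```python
import numpy as np

def components_for_variance(X, threshold=0.95):
    """Smallest number of PCs whose cumulative explained variance reaches threshold."""
    Xc = X - X.mean(axis=0)                       # column-center the data
    s = np.linalg.svd(Xc, full_matrices=False, compute_uv=False)
    explained = (s ** 2) / np.sum(s ** 2)         # explained-variance ratio per PC
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)
```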
Figure 4 Slice plot for the model selection for KPCA at the optimal bandwidth (rescaled criterion against the number of components k). (a) Cervical cancer (b) Colon cancer.
Comparison between the proposed criterion and an existing optimization algorithm for RBF-KPCA
We selected two experiments from Pochet et al. [17] (the last two data sets in Table 1), being the high-grade glioma and breast cancer II data sets. We repeated the same experiments as reported in Pochet et al. [17] and compared them with the proposed strategy. The results are shown in Table 5. The three-dimensional surface plot of the LOOCV performance of the method proposed by [17] for the high-grade glioma data set is shown in Figure 7.
Table 4 Statistical significance test which compares KPCA with other classifiers: whole data, PCA, t-test, PAM and Lasso
Kernel function   Method        Data set: I   II   III   IV   V   VI   VII   VIII   IX
Whole data 1.0000 1.0000 0.9250 0.0015 0.5750 0.0400 0.0628 0.0200 0.0150
PCA 0.0050 0.0021 0.0003 0.0015 2.83E-08 5.00E-07 0.0250 0.0005 0.0140
RBF t-test 1.0000 1.0000 1.0000 1.0000 6.50E-04 4.35E-04 0.0110 0.0005 1.0000
PAM 1.0000 6.10E-05 0.0002 0.0800 0.1450 0.0462 1.0000 0.0002 0.0015
Lasso 0.0278 1.000 0.0001 0.0498 1.0000 0.0015 1.0000 0.00003 0.0200
Whole data 1.0000 0.3095 1.0000 1.0000 1.0000 1.0000 1.0000 0.0009 1.0000
PCA 7.00E-05 0.0011 1.30E-09 7.70E-09 1.28E-08 2.72E-05 6.15E-07 0.357 0.230
lin t-test 1.0000 0.2150 0.7200 1.0000 0.0559 0.0443 1.0000 0.5450 1.0000
PAM 0.0400 0.0003 0.0422 0.0015 0.0004 0.0001 0.0015 1.0000 0.0300
Lasso 0.4950 0.4950 0.0049 2.12E-06 0.0005 0.0493 0.0025 1.0000 2.12E-06
Whole data 1.0000 0.0100 1.0000 4.16E-11 0.00450 5.90E-08 7.70E-08 1.0000 1.0000
PCA 0.0130 0.0003 4.35E-07 4.50E-05 7.70E-08 0.0040 3.28E-08 2.72E-05 5.00E-11
poly t-test 1.0000 1.0000 0.0250 1.0000 0.0443 0.2100 1.0000 0.0005 1.0000
PAM 0.1200 0.0005 0.0100 0.0400 0.0300 1.0000 0.0015 0.0200 0.0650
Lasso 0.0100 1.0000 4.61E-05 1.76E-08 0.5000 1.0000 0.0006 0.0010 0.4350
P-values of the two-sided signed rank test are given.
p-values: False Discovery Rate (FDR) corrected.
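A hedged sketch of how the entries of Table 4 could be computed from the 30 paired test AUC values: a two-sided Wilcoxon signed-rank test per competing method, followed by a Benjamini-Hochberg FDR correction (the exact implementation used by the authors is not given in this excerpt):

```python
import numpy as np
from scipy.stats import wilcoxon

def signed_rank_pvalues(auc_kpca, auc_by_method):
    """Two-sided signed-rank p-values comparing KPCA+LS-SVM with each method.

    auc_kpca      : (30,) test AUCs of KPCA+LS-SVM over the repetitions.
    auc_by_method : dict mapping method name -> (30,) test AUCs.
    """
    return {name: wilcoxon(auc_kpca, auc, alternative="two-sided").pvalue
            for name, auc in auc_by_method.items()}

def fdr_correct(pvals):
    """Benjamini-Hochberg FDR adjustment of a 1-D array of p-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(p)
    out[order] = np.minimum(adjusted, 1.0)
    return out
```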
Figure 5 Plot for the selection of the optimal number of components for PCA (rescaled criterion against the number of components k). (a) Cervical cancer (b) Colon cancer.
Figure 6 The prediction performances on colon cancer data with PCA+LS-SVM (RBF): averaged test AUC against the percentage of variance explained. The number of selected components depends on the chosen fraction of explained total variance; the 15 components selected by the proposed criterion for PCA correspond to 65% of the variance explained.
Figure 7 LOOCV performance of the optimization algorithm of [17] on the high-grade glioma data set (surface plot of the LOOCV performance against the bandwidth h and the number of components k).
For the Lasso, the model obtained at the optimal tuning parameter value contains the non-outcome-predictive genes (i.e., false positive genes) in many cases [9].

The test AUC on all nine case studies shows that KPCA performs better than classical PCA, but the parameters of KPCA need to be optimized. Here we have used the LOOCV approach for the parameter selection (bandwidth and number of components) of KPCA. In the optimization algorithm proposed by Pochet et al. [17], the combination of KPCA with the RBF kernel followed by FDA tends to result in overfitting. The proposed parameter selection criterion for KPCA with the RBF kernel often results in test set performances (see Table 4) that are better than using KPCA with a linear kernel, as reported in Pochet et al. This means that LOOCV in the proposed parameter selection criterion does not encounter overfitting for KPCA with the RBF kernel function. In addition, the optimization algorithm proposed by Pochet et al. is completely coupled with the subsequent classifier and thus appears to be very time-consuming.

Table 6 Summary of the range (minimum to maximum) of features selected over 30 iterations

Data set                                    t-test (p < 0.05)   PAM         Lasso
1: Colon                                    197-323             15-373      8-36
2: Breast                                   993-1124            13-4718     7-87
3: Pancreatic                               2713-4855           3-1514      12-112
4: Cervical                                 5858-6756           2-10692     5-67
5: Leukemia                                 1089-2654           137-11453   2-69
6: Ovarian                                  7341-7841           34-278      62-132
7: Head and neck squamous cell carcinoma    307-831             1-12625     3-35
8: Duchenne muscular dystrophy              973-2031            129-22283   8-24
9: HIV encephalitis                         941-1422            1-12625     1-20
p-value: False Discovery Rate (FDR) corrected.

In combination with classification methods, microarray data analysis can be useful to guide clinical management in cancer studies. In this study, several mathematical and statistical techniques were evaluated and compared in order to optimize the performance of clinical predictions based on microarray data. Considering the possibility of increasing size and complexity of microarray data sets in the future, dimensionality reduction and nonlinear techniques have their own significance. In many cases, in a specific application context, the best feature set is still important (e.g. drug discovery). Considering the stability and performance (both accuracy and execution time) of classifiers, the proposed methodology has its own importance for predicting the classes of future samples of known disease cases.

Finally, this work could be extended further to uncover key features from biological data sets. In several studies, KPCA has been used to obtain biologically relevant features
such as genes [38,39] or to detect the association between multiple SNPs and disease [40]. In all these cases, one needs to address the parameter optimization of KPCA. The available bandwidth selection techniques for KPCA are time-consuming, with a high computational burden. This could be resolved with the proposed data-driven bandwidth selection criterion for KPCA.

PFV/10/016 SymBioSys, PhD/Postdoc grants; Industrial Research Fund (IOF): IOF/HB/13/027 Logic Insulin; Flemish Government: FWO projects: G.0871.12N (Neural circuits), PhD/Postdoc grants; IWT: TBM-Logic Insulin (100793), TBM Rectal Cancer (100783), TBM IETA (130256), PhD/Postdoc grants; Hercules Stichting: Hercules 3: PacBio RS, Hercules 1: The C1 single-cell auto prep system, BioMark HD System and IFC controllers (Fluidigm) for single-cell analyses; iMinds Medical Information Technologies SBO 2014; VLK Stichting E. van der Schueren: rectal cancer; Federal Government: FOD: Cancer Plan 2012-2015 KPC-29-023 (prostate); COST Action BM1104: Mass Spectrometry Imaging. The scientific responsibility is assumed by its authors.
18. Bioinformatics research group [http://www.upo.es/eps/bigs/datasets.html]
19. Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D, Levine AJ: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. PNAS 1999, 96(12):6745-6750.
20. Hess KR, Anderson K, Symmans WF, Valero V, Ibrahim N, Mejia JA, Booser D, Theriault RL, Buzdar AU, Dempsey PJ, Rouzier R, Sneige N, Ross JS, Vidaurre T, Gómez HL, Hortobagyi GN, Pusztai L: Pharmacogenomic predictor of sensitivity to preoperative chemotherapy with paclitaxel and fluorouracil, doxorubicin, and cyclophosphamide in breast cancer. J Clin Oncol 2006, 24:4236-4244.
21. FDA-NCI clinical proteomics program databank [http://home.ccr.cancer.gov/ncifdaproteomics/ppatterns.asp]
22. Hingorani SR, Petricoin EF, Maitra A, Rajapakse V, King C, Jacobetz MA, Ross S, Conrads TP, Veenstra TD, Hitt BA, Kawaguchi Y, Johann D, Liotta LA, Crawford HC, Putt ME, Jacks T, Wright CV, Hruban RH, Lowy AM, Tuveson DA: Preinvasive and invasive ductal pancreatic cancer and its early detection in the mouse. Cancer Cell 2003, 4(6):437-50.
23. Wong YF, Selvanayagam ZE, Wei N, Porter J: Expression genomics of cervical cancer: molecular classification and prediction of radiotherapy response by DNA microarray. Clin Cancer Res 2003, 9(15):5486-92.
24. Stirewalt DL, Meshinchi S, Kopecky KJ, Fan W: Identification of genes with abnormal expression changes in acute myeloid leukemia. Genes Chromosomes Cancer 2008, 47(1):8-20.
25. Kuriakose MA, Chen WT, He ZM, Sikora AG: Selection and validation of differentially expressed genes in head and neck cancer. Cell Mol Life Sci 2004, 61(11):1372-83.
26. Pescatori M, Broccolini A, Minetti C, Bertini E: Gene expression profiling in the early phases of DMD: a constant molecular signature characterizes DMD muscle from early postnatal life throughout disease progression. FASEB J 2007, 21(4):1210-26.
27. Masliah E, Roberts ES, Langford D, Everall I: Patterns of gene dysregulation in the frontal cortex of patients with HIV encephalitis. J Neuroimmunol 2004, 157(1-2):163-75.
28. Nutt CL, Mani DR, Betensky RA, Tamayo P, Cairncross JG, Ladd U, Pohl C, Hartmann C, McLaughlin ME, Batchelor TT, Black PM, von Deimling A, Pomeroy SL, Golub TR, Louis DN: Gene expression-based classification of malignant gliomas correlates better with survival than histological classification. Cancer Res 2003, 63(7):1602-1607.
29. van't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernard R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415(6871):530-536.
30. Suykens JAK, Van Gestel T, Vandewalle J, De Moor B: A support vector machine formulation to PCA analysis and its kernel version. IEEE Trans Neural Netw 2003, 14(2):447-450.
31. Mercer J: Functions of positive and negative type and their connection with the theory of integral equations. Philos Trans R Soc A 1909, 209:415-446.
32. Bowman AW: An alternative method of cross-validation for the smoothing of density estimates. Biometrika 1984, 71:353-360.
33. Rudemo M: Empirical choice of histograms and kernel density estimators. Scand J Statist 1982, 9:65-78.
34. Alzate C, Suykens JAK: Kernel component analysis using an epsilon-insensitive robust loss function. IEEE Trans Neural Netw 2008, 9(19):1583-98.
35. Suykens JAK, Vandewalle J: Least squares support vector machine classifiers. Neural Process Lett 1999, 9:293-300.
36. De Brabanter K, Karsmakers P, Ojeda F, Alzate C, De Brabanter J, Pelckmans K, De Moor B, Vandewalle J, Suykens JAK: LS-SVMlab toolbox user's guide version 1.8. Internal Report ESAT-SISTA, K.U.Leuven (Leuven, Belgium) 2010:10-146.
37. Verweij PJ, Houwelingen HC: Cross-validation in survival analysis. Stat Med 1993, 12:2305-14.
38. Reverter F, Vegas E, Sánchez P: Mining gene expression profiles: an integrated implementation of kernel principal component analysis and singular value decomposition. Genomics Proteomics Bioinformatics 2010, 3(8):200-210.
39. Gao Q, He Y, Yuan Z, Zhao J, Zhang B, Xue F: Gene- or region-based association study via kernel principal component analysis. BMC Genetics 2011, 12(75):1-8.
40. Wu MC, Kraft P, Epstein MP, Taylor DM, Chanock SJ, Hunter DJ, Lin X: Powerful SNP-set analysis for case-control genome-wide association studies. Am J Hum Genet 2010, 6(86):929-942.