Feature Vector Machine
Abstract
1 Introduction
Finding a small subset of most predictive features in a high dimensional feature space is an
interesting problem with many important applications, e.g. in bioinformatics for the study
of the genome and the proteome, and in pharmacology for high throughput drug screening.
Lasso regression ([Tibshirani et al., 1996]) is often an effective technique for shrinkage and
feature selection. The loss function of Lasso regression is defined as:
L = \sum_i \Big( y_i - \sum_p \beta_p x_{ip} \Big)^2 + \lambda \sum_p |\beta_p|
where xip denotes the pth predictor (feature) in the ith datum, yi denotes the value of the
response in this datum, and βp denotes the regression coefficient of the pth feature. The
norm-1 regularizer \sum_p |βp| in Lasso regression typically leads to a sparse solution in the
feature space, which means that the regression coefficients for most irrelevant or redundant
features are shrunk to zero. Theoretical analysis in [Ng et al., 2003] indicates that Lasso
regression is particularly effective when there are many irrelevant features and only a few
training examples.
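To make the shrinkage effect concrete, the following minimal sketch (ours, not from the paper) fits a Lasso model with scikit-learn; the data and the value of the regularization parameter alpha (which plays the role of a rescaled λ in scikit-learn's objective) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 50, 200                       # few examples, many features
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]     # only 3 relevant features
y = X @ beta_true + 0.1 * rng.normal(size=n)

# The norm-1 penalty shrinks most coefficients exactly to zero.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)   # indices of non-zero weighted features
print(selected)
```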
One of the limitations of standard Lasso regression is its assumption of linearity in the
feature space. Hence it is inadequate for capturing non-linear dependencies from features to
responses (output variables). To address this limitation, [Roth, 2004] proposed the “generalized
Lasso regression” (GLR) by introducing kernels. In GLR, the loss function is defined
as

L = \sum_i \Big( y_i - \sum_j \alpha_j k(x_i, x_j) \Big)^2 + \lambda \sum_i |\alpha_i|
where αj can be regarded as the regression coefficient corresponding to the jth basis in an
instance space (more precisely, a kernel space with its basis defined on all examples), and
k(xi, xj) represents some kernel function over the “argument” instance xi and the “basis”
instance xj . The non-linearity can be captured by a non-linear kernel. This loss function
typically yields a sparse solution in the instance space, but not in the feature space where the
data were originally represented. Thus GLR does not lead to a compression of the data in the
feature space.
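Since the GLR loss is simply the Lasso objective with the n x n kernel matrix playing the role of the design matrix, a minimal sketch (our illustration, not code from [Roth, 2004]; the RBF kernel and the regularization value are arbitrary choices) looks like this:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 5))                     # 100 examples, 5 features
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=100)

K = rbf_kernel(X, X, gamma=1.0)                    # n x n kernel (instance) matrix
# GLR: minimize sum_i (y_i - sum_j alpha_j k(x_i, x_j))^2 + lambda * sum_j |alpha_j|
glr = Lasso(alpha=0.01).fit(K, y)
print(np.count_nonzero(glr.coef_), "non-zero alphas out of", len(glr.coef_))
```

The non-zero αj mark "support" instances, which illustrates the point above: the sparsity lives in the instance space rather than in the feature space.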
[Weston et al., 2000], [Canu et al., 2002] and [Krishnapuram et al., 2003] addressed the
limitation from a different angle. They introduced feature scaling kernels in the form of:
K_\theta(x_i, x_j) = \phi(x_i * \theta)^T \phi(x_j * \theta) = K(x_i * \theta, x_j * \theta)
where xi ∗ θ denotes the component-wise product between the two vectors: xi ∗ θ = (xi1 θ1, ..., xip θp).
For example, [Krishnapuram et al., 2003] used a feature scaling polynomial kernel:

K_\gamma(x_i, x_j) = \Big( 1 + \sum_p \gamma_p x_{ip} x_{jp} \Big)^k,
where γp = θp^2. With a norm-1 or norm-0 penalty on γ in the loss function of a feature
scaling kernel machine, a sparse solution is expected to identify the most influential
features. Notice that in this formalism the feature scaling vector θ is inside the kernel
function, which means that the solution space of θ could be non-convex. Thus, estimating θ
in feature scaling kernel machines is a much harder problem than the convex optimization
problem in a conventional SVM, where the weight parameters to be estimated lie outside
the kernel function.
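For concreteness, the feature scaling polynomial kernel above can be written in a few lines (a sketch with hypothetical inputs; the difficulty noted above is that γ itself must be learned, and it sits inside this function):

```python
import numpy as np

def scaled_poly_kernel(xi, xj, gamma, k=2):
    """K_gamma(xi, xj) = (1 + sum_p gamma_p * xi_p * xj_p) ** k."""
    return (1.0 + np.dot(gamma * xi, xj)) ** k
```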
What we seek here is an alternative approach that guarantees a sparse solution in the
feature space, that is sufficient for capturing both linear and non-linear relationships
between the features and the response variable, and that does not involve parameter
optimization inside the kernel function. The last property is particularly desirable because
it allows us to leverage the many existing techniques for kernel machines that have been so
successful in SVM-related research.
We propose a new approach whose key idea is to re-formulate and extend Lasso regression
into a form that is similar to SVM, except that it generates a sparse solution in the
feature space rather than in the instance space. We call our newly formulated and extended
Lasso regression the “Feature Vector Machine” (FVM). We will show (in Section 2) that FVM
has many interesting properties that mirror SVM. The concepts of support vectors, kernels
and slack variables can be easily adapted to FVM. Most importantly, all the parameters we
need to estimate for FVM are outside of the kernel functions, ensuring the convexity of the
solution space, just as in SVM.¹ When a linear kernel is used and no slack variables are
introduced, FVM reduces to standard Lasso regression.
¹ Notice that we can not only use FVM to select important features from training data, but also use
it to predict the values of response variables for test data (see Section 5). We have shown that we only
need convex optimization in the training phase of FVM. In the test phase, FVM makes a prediction for
each test example independently, which involves only a one-dimensional optimization problem
with respect to the response variable of that test example. Although the optimization in the test phase
may be non-convex, it is relatively easy to solve because it is only one-dimensional. This is the
price we pay for avoiding a high dimensional non-convex optimization in the training phase, which
may involve thousands of model parameters.
We notice that [Hochreiter et al., 2004] have recently developed an interesting feature selection
technique named the “potential SVM”, which has the same form as the basic version of
FVM (with a linear kernel and no slack variables). However, they did not explore the
relationship between the potential SVM and Lasso regression. Furthermore, their method does
not work for feature selection tasks with non-linear models, since they did not introduce the
concept of kernels defined on feature vectors.
In Section 2, we analyze some geometric similarities between the solution hyper-planes in
standard Lasso regression and in SVM. In Section 3, we re-formulate Lasso regression
in an SVM-style form, in which all the operations on the training data can be expressed
by dot products between feature vectors. In Section 4, we introduce kernels (defined on
feature vectors) to FVM so that it can be used for feature selection with non-linear models.
In Section 5, we discuss FVM further. In Section 6, we present experiments, and
in Section 7 we conclude.
A quick inspection of this formulation reveals that it shares the same constraint function
that must be satisfied in Lasso regression. Unfortunately, this connection was not further
explored in [Hochreiter et al., 2004], e.g., to relate the objective function to that of
Lasso regression, or to extend the objective function using kernel tricks in a way similar
to SVM. Here we show that the solution to Eq. (2) is exactly the same as that of a standard
Lasso regression. In other words, Lasso regression can be re-formulated as Eq. (2). Then,
based on this re-formulation, we show how to introduce kernels to allow feature selection
under a non-linear Lasso regression. We refer to the optimization problem defined by Eq.
(3), and its kernelized extensions, as the feature vector machine (FVM).
Proposition 1: For a Lasso regression problem \min_\beta \sum_i \big( \sum_p x_{ip} \beta_p - y_i \big)^2 + \lambda \sum_p |\beta_p|,
its solution is the same as that of

\min_\beta \; \tfrac{1}{2} \| X^T \beta \|^2
\text{s.t.} \; X(X^T \beta - y) - \tfrac{\lambda}{2} e \le 0,    (4)
            \; X(X^T \beta - y) + \tfrac{\lambda}{2} e \ge 0
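This equivalence can be checked numerically. The sketch below (ours, not part of the paper; it assumes cvxpy and scikit-learn are available) solves the quadratic program (4), with D = X^T the usual examples-by-features design matrix, and compares the minimizer with a standard Lasso fit, using alpha = λ/(2n) to match scikit-learn's scaling of the Lasso objective.

```python
import numpy as np
import cvxpy as cp
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d = 60, 20
D = rng.normal(size=(n, d))              # design matrix (examples x features), i.e. X^T above
beta_true = np.concatenate([[1.5, -2.0, 1.0], np.zeros(d - 3)])
y = D @ beta_true + 0.1 * rng.normal(size=n)
lam = 10.0

# Problem (4): min 0.5*||D beta||^2  s.t.  -lam/2 <= D^T (D beta - y) <= lam/2
beta = cp.Variable(d)
constraints = [cp.abs(D.T @ (D @ beta - y)) <= lam / 2]
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(D @ beta)), constraints).solve()

# Standard Lasso: min ||D beta - y||^2 + lam*||beta||_1
# (scikit-learn scales its loss by 1/(2n), hence alpha = lam / (2n))
lasso = Lasso(alpha=lam / (2 * n), fit_intercept=False, max_iter=50000).fit(D, y)

print(np.max(np.abs(beta.value - lasso.coef_)))   # should be ~0 up to solver tolerance
```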
4 Feature kernels
In many cases, the dependencies between feature vectors are non-linear. Analogous to
SVM, here we introduce kernels that capture such non-linearity. Note that unlike in SVM, our
kernels are defined on feature vectors instead of the sample vectors (i.e., on the rows rather
than the columns of the data matrix). Such kernels also allow us to easily incorporate
certain domain knowledge into the classifier.
Suppose that two feature vectors fp and fq have a non-linear dependency relationship. In
the absence of a linear interaction between fp and fq in the original space, we assume
that they can be mapped to some (higher dimensional, possibly infinite-dimensional) space
via a transformation φ(·), so that φ(fp) and φ(fq) interact linearly, i.e., via a dot product
φ(fp)^T φ(fq). We introduce the kernel K(fp, fq) = φ(fp)^T φ(fq) to represent the outcome of
this operation.
Replacing f with φ(f) in Problem (3), we have

\min_\beta \; \tfrac{1}{2} \sum_{p,q} \beta_p \beta_q K(f_p, f_q)
\text{s.t.} \; \forall q, \; \Big| \sum_p \beta_p K(f_q, f_p) - K(f_q, y) \Big| \le \tfrac{\lambda}{2}    (5)
Now, in Problem (5), we no longer have φ(·), which means we do not have to work in
the transformed feature space, which could be high or infinite dimensional, to capture the
non-linearity of features. The kernel K(·, ·) can be any symmetric positive semi-definite matrix.
When domain knowledge from experts is available, it can be incorporated into the choice
of kernel (e.g., based on the distribution of feature values). When domain knowledge is not
available, we can use general kernels that can detect non-linear dependencies without
any distributional assumptions. In the following we give one such example.
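For a given feature-by-feature kernel matrix, Problem (5) is a small quadratic program in β. A minimal sketch (ours, assuming cvxpy and a positive semi-definite kernel matrix; Kff holds the entries K(fp, fq) and Kfy the entries K(fq, y)):

```python
import numpy as np
import cvxpy as cp

def solve_fvm(Kff, Kfy, lam, eps=1e-9):
    """Solve Problem (5):  min 0.5 * sum_{p,q} beta_p beta_q K(f_p, f_q)
                           s.t. |sum_p beta_p K(f_q, f_p) - K(f_q, y)| <= lam/2 for all q."""
    d = Kff.shape[0]
    # Factor the (assumed PSD) kernel matrix so the objective becomes a sum of squares.
    L = np.linalg.cholesky(Kff + eps * np.eye(d))
    beta = cp.Variable(d)
    objective = cp.Minimize(0.5 * cp.sum_squares(L.T @ beta))   # = 0.5 * beta^T Kff beta
    constraints = [cp.abs(Kff @ beta - Kfy) <= lam / 2]
    cp.Problem(objective, constraints).solve()
    return beta.value
```

The mutual information kernel described next is one way to construct Kff and Kfy.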
One possible kernel is the mutual information [Cover et al., 1991] between two feature
vectors: K(fp, fq) = MI(fp, fq). This kernel requires a pre-processing step to discretize
the elements of the feature vectors, because they are in general continuous. In this paper, we
discretize the continuous variables according to their ranks across the examples. Suppose
we have N examples in total. Then, for each feature, we sort its values in these N examples.
The first m values (the smallest m values) are assigned scale 1, the (m + 1)th to 2m-th
values are assigned scale 2, and this process is iterated until all the values are assigned
corresponding scales. It is easy to see that in this way we can guarantee that for any two
features p and q, K(fp, fp) = K(fq, fq), which means the feature vectors are normalized
and have the same length in the φ space (residing on a unit sphere centered at the origin).
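A minimal sketch of this rank-based discretization and of the resulting mutual information kernel (ours; it uses scikit-learn's mutual_info_score, and ties are handled in an assumed, simplified way):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def rank_discretize(f, n_bins=10):
    """Assign each of the N values of a feature vector f to one of n_bins
    equally sized rank bins (the smallest N/n_bins values get bin 0, etc.)."""
    order = np.argsort(f)
    bins = np.empty_like(order)
    bins[order] = np.arange(len(f)) * n_bins // len(f)
    return bins

def mi_kernel(F, y, n_bins=10):
    """F: (d, N) matrix whose rows are feature vectors; y: (N,) response vector.
    Returns the d x d matrix K(f_p, f_q) = MI(f_p, f_q) and the d-vector K(f_q, y)."""
    Fd = np.array([rank_discretize(f, n_bins) for f in F])
    yd = rank_discretize(y, n_bins)
    d = Fd.shape[0]
    Kff = np.array([[mutual_info_score(Fd[p], Fd[q]) for q in range(d)] for p in range(d)])
    Kfy = np.array([mutual_info_score(Fd[q], yd) for q in range(d)])
    return Kff, Kfy
```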
The mutual information kernel has several good properties. For example, it is symmetric
(i.e., K(fp, fq) = K(fq, fp)), non-negative, and can be normalized. It also has an intuitive
interpretation in terms of the redundancy between features. Therefore, non-linear feature
selection using generalized Lasso regression with this kernel yields human-interpretable
results.
As we have shown, FVM is a straightforward feature selection algorithm for non-linear features
captured in a kernel; the selection can be easily done by solving a standard SVM-style
problem in the feature space, which yields an optimal vector β of which most elements are
zero. It turns out that the same procedure also seamlessly leads to a Lasso-style regularized
non-linear regression capable of predicting the response given data in the original space.
In the prediction phase, all we have to do is keep the trained β fixed and turn the
optimization problem (5) into an analogous one that optimizes over the response y. Specifically,
given a new sample xt with unknown response, our sample matrix X grows by one
column, X → [X, xt], which means all our feature vectors get one more dimension. We
denote the newly elongated features by F' = {f'_q}_{q∈A} (note that A is the pruned index
set corresponding to features whose weight βq is non-zero). Let y' denote the response
vector elongated with the new sample, y' = (y1, ..., yN, yt)^T. It can be shown that
the optimal response yt can be obtained by solving the following optimization problem²:
\min_{y_t} \; K(y', y') - 2 \sum_{p \in A} \beta_p K(y', f'_p)    (6)
When we replace the kernel function K with a linear dot product, FVM reduces to Lasso
regression. Indeed, in this special case it is easy to see from Eq. (6) that yt = \sum_{p∈A} βp xtp,
which is exactly how Lasso regression would predict the response; one predicts
yt from β and xt without using the training data X. However, when a more complex
kernel is used, solving Eq. (6) is not always trivial. In general, to predict yt, we need not
only xt and β, but also the non-zero weighted features extracted from the training data.
² For simplicity we omit the details here, but as a rough sketch, note that Eq. (5) can be reformulated as

\min_\beta \; \Big\| \phi(y') - \sum_p \beta_p \phi(f'_p) \Big\|^2 + \lambda \sum_p |\beta_p| .

Replacing the optimization argument β with y and dropping the terms irrelevant to yt, we arrive at Eq. (6).
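In code, the prediction step is therefore a one-dimensional search. The sketch below (ours; the grid search over candidate values of yt and the function signature are assumptions, not part of the paper) evaluates the objective of Eq. (6) for each candidate and returns the minimizer.

```python
import numpy as np

def predict_response(beta_A, F_prime, y_train, kernel, grid):
    """1-D search for y_t in Eq. (6):
         min_{y_t}  K(y', y') - 2 * sum_{p in A} beta_p K(y', f'_p),
    where y' = (y_1, ..., y_N, y_t).  beta_A holds the non-zero weights,
    F_prime (|A| x (N+1)) the corresponding elongated feature vectors, and
    `grid` a set of candidate y_t values; a grid search is a simple, robust
    way to handle this (possibly non-convex) one-dimensional problem."""
    def objective(yt):
        y_prime = np.append(y_train, yt)
        return kernel(y_prime, y_prime) - 2 * sum(
            b * kernel(y_prime, f) for b, f in zip(beta_A, F_prime))
    return min(grid, key=objective)
```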
As in SVM, we can introduce slack variables into FVM to define a “soft” feature surface.
But due to space limitation, we omit details here. Essentially, most of the methodologies
developed for SVM can be easily adapted to FVM for nonlinear feature selection.
6 Experiments
We test FVM on a simulated dataset with 100 features and 500 examples. The response
variable y in the simulated data is generated by a highly nonlinear rule:
y = \sin(10 f_1 - 5) + 4 \sqrt{1 - f_2^2} - 3 f_3 + \xi.
Here features f1 and f3 are random variables following a uniform distribution on [0, 1];
feature f2 is a random variable uniformly distributed on [−1, 1]; and ξ represents Gaussian
noise. The other 97 features f4, f5, ..., f100 are conditionally independent of y given the
three features f1, f2 and f3. In particular, f4, ..., f33 are all generated by the rule fj =
3 f1 + ξ; f34, ..., f72 are all generated by the rule fj = sin(10 f2) + ξ; and the remaining
features (f73, ..., f100) simply follow a uniform distribution on [0, 1]. Fig. 2 shows our data
projected onto the space spanned by f1, f2 and y.
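The simulated dataset can be reproduced along the following lines (our sketch; the noise level is not specified in the text, so the value 0.1 below is an assumption).

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 100
F = np.empty((N, d))
F[:, 0] = rng.uniform(0, 1, N)       # f1
F[:, 1] = rng.uniform(-1, 1, N)      # f2
F[:, 2] = rng.uniform(0, 1, N)       # f3

# Response: highly non-linear in f1, f2, f3 plus Gaussian noise.
y = (np.sin(10 * F[:, 0] - 5) + 4 * np.sqrt(1 - F[:, 1] ** 2)
     - 3 * F[:, 2] + 0.1 * rng.normal(size=N))

# Distractors: conditionally independent of y given f1, f2, f3.
F[:, 3:33] = 3 * F[:, [0]] + 0.1 * rng.normal(size=(N, 30))            # f4..f33
F[:, 33:72] = np.sin(10 * F[:, [1]]) + 0.1 * rng.normal(size=(N, 39))  # f34..f72
F[:, 72:] = rng.uniform(0, 1, (N, 28))                                 # f73..f100
```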
We use a mutual information kernel for our FVM. For each feature, we sort its values across
the examples and use the ranks to discretize these values into 10 scales (each scale thus
corresponds to 50 data points). An FVM can be solved by quadratic programming, but
more efficient solutions exist. [Perkins et al., 2003] proposed a fast grafting algorithm
to solve Lasso regression, which is a special case of FVM with a linear kernel. In our
implementation, we extend the fast grafting algorithm to FVM with more general
kernels. The only difference is that, each time we need to calculate \sum_i x_{pi} x_{qi}, we
calculate K(fp, fq) instead. We found the fast grafting algorithm to be very efficient in our
case because it exploits the sparsity of the FVM solution.
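Concretely, the working-set test in the adapted grafting loop reduces to checking, for each currently inactive feature q, whether |\sum_p βp K(fq, fp) − K(fq, y)| exceeds λ/2, i.e., whether the corresponding constraint of Problem (5) is violated. A small sketch of that test (ours; [Perkins et al., 2003] describe grafting for the linear case, so this kernelized version is our adaptation of the substitution described above):

```python
import numpy as np

def grafting_candidates(beta, Kff, Kfy, lam):
    """Return inactive features that violate the optimality condition
    |sum_p beta_p K(f_q, f_p) - K(f_q, y)| <= lam/2, i.e. the features
    that a grafting-style algorithm would consider adding next."""
    grad = Kff @ beta - Kfy            # K(f_p, f_q) in place of sum_i x_pi x_qi
    inactive = beta == 0
    return np.flatnonzero(inactive & (np.abs(grad) > lam / 2))
```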
We apply both standard Lasso regression and FVM with the mutual information kernel to this
dataset. The value of the regularization parameter λ can be tuned to control the number
of non-zero weighted features. In our experiment, we tried two choices of λ for both
FVM and standard Lasso regression. In one case, we set λ such that only 3 non-zero
weighted features are selected; in the other, we relaxed λ a bit and allowed 10 features.
The results are very encouraging. As shown in Fig. 3, under the stringent λ, FVM
successfully identified the three correct features, f1, f2 and f3, whereas Lasso regression
missed f1 and f2, which are non-linearly correlated with y. Even when λ was relaxed,
Lasso regression still missed the right features, whereas FVM remained very robust.
Figure 2: The response y and the two features f1 and f2 in our simulated data. Two graphs
from different angles are plotted to show the 3D distribution more clearly.
Figure 3: Results of FVM and standard Lasso regression on this dataset. The X axis
represents the feature IDs and the Y axis represents the weights assigned to the features.
The two left panels show the case where 3 features are selected by each algorithm, and the
two right panels show the case where 10 features are selected. From the bottom-left panel,
we can see that FVM successfully identified f1, f2 and f3 as the three non-zero weighted
features. From the top-left panel, we can see that Lasso regression missed f1 and f2, which
are non-linearly correlated with y. The two right panels show similar patterns.

7 Conclusions

In this paper, we proposed a novel non-linear feature selection approach named FVM,
which extends standard Lasso regression by introducing kernels on feature vectors. FVM
has many interesting properties that mirror the well-known SVM, and can therefore leverage
many of the computational advantages of the latter approach. Our experiments with FVM
on highly non-linear and noisy simulated data show encouraging results: FVM correctly
identifies the small number of dominating features that are non-linearly correlated with
the response variable, a task that standard Lasso regression fails to accomplish.
References

[Canu et al., 2002] Canu, S. and Grandvalet, Y. Adaptive Scaling for Feature Selection in SVMs. NIPS 15, 2002.
[Hochreiter et al., 2004] Hochreiter, S. and Obermayer, K. Gene Selection for Microarray Data. In Kernel Methods in Computational Biology, pp. 319-355, MIT Press, 2004.
[Krishnapuram et al., 2003] Krishnapuram, B. et al. Joint classifier and feature optimization for cancer diagnosis using gene expression data. The Seventh Annual International Conference on Research in Computational Molecular Biology (RECOMB), ACM Press, April 2003.
[Ng et al., 2003] Ng, A. Feature selection, L1 vs. L2 regularization, and rotational invariance. ICML 2004.
[Perkins et al., 2003] Perkins, S., Lacker, K. and Theiler, J. Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space. JMLR 3 (2003), 1333-1356.
[Roth, 2004] Roth, V. The Generalized LASSO. IEEE Transactions on Neural Networks (2004), Vol. 15, No. 1.
[Tibshirani et al., 1996] Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B (1996), 58, No. 1, 267-288.
[Cover et al., 1991] Cover, T. M. and Thomas, J. A. Elements of Information Theory. New York: John Wiley & Sons (1991).
[Weston et al., 2000] Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T. and Vapnik, V. Feature Selection for SVMs. NIPS 13, 2000.