
From Lasso regression to Feature vector machine

Fan Li, Yiming Yang and Eric P. Xing
LTI and CALD, School of Computer Science, Carnegie Mellon University,
Pittsburgh, PA USA 15213
{hustlf,yiming,epxing}@cs.cmu.edu

Abstract

Lasso regression tends to assign zero weights to most irrelevant or redundant features, and hence is a promising technique for feature selection. Its limitation, however, is that it only offers solutions to linear models. Kernel machines with feature scaling techniques have been studied for feature selection with non-linear models. However, such approaches require solving hard non-convex optimization problems. This paper proposes a new approach named the Feature Vector Machine (FVM). It reformulates the standard Lasso regression into a form isomorphic to SVM, and this form can be easily extended for feature selection with non-linear models by introducing kernels defined on feature vectors. FVM generates sparse solutions in the non-linear feature space and is much more tractable than feature scaling kernel machines. Our experiments with FVM on simulated data show encouraging results in identifying the small number of dominating features that are non-linearly correlated to the response, a task the standard Lasso fails to complete.

1 Introduction

Finding a small subset of the most predictive features in a high-dimensional feature space is an interesting problem with many important applications, e.g., in bioinformatics for the study of the genome and the proteome, and in pharmacology for high-throughput drug screening. Lasso regression ([Tibshirani et al., 1996]) is often an effective technique for shrinkage and feature selection. The loss function of Lasso regression is defined as

L = \sum_i \Big( y_i - \sum_p \beta_p x_{ip} \Big)^2 + \lambda \sum_p \|\beta_p\|_1

where x_{ip} denotes the pth predictor (feature) in the ith datum, y_i denotes the value of the response in this datum, and β_p denotes the regression coefficient of the pth feature. The norm-1 regularizer \sum_p \|\beta_p\|_1 in Lasso regression typically leads to a sparse solution in the feature space, which means that the regression coefficients for most irrelevant or redundant features are shrunk to zero. Theoretical analysis in [Ng et al., 2003] indicates that Lasso regression is particularly effective when there are many irrelevant features and only a few training examples.
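As a concrete illustration of this behaviour, the short sketch below (our own example, not from the paper) fits a standard Lasso model with scikit-learn on data where only three of many features matter, and checks which coefficients survive; the data sizes and the regularization value are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k = 50, 200                          # few examples, many features
X = rng.normal(size=(n, k))
beta_true = np.zeros(k)
beta_true[:3] = [2.0, -1.5, 1.0]        # only three relevant features
y = X @ beta_true + 0.1 * rng.normal(size=n)

# The L1 penalty (alpha plays the role of lambda) shrinks most
# coefficients exactly to zero, which is what enables feature selection.
model = Lasso(alpha=0.1).fit(X, y)
print("selected features:", np.flatnonzero(model.coef_))
```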
One of the limitations of standard Lasso regression is its assumption of linearity in the
feature space. Hence it is inadequate to capture non-linear dependencies from features to
responses (output variables). To address this limitation, [Roth, 2004] proposed the "generalized Lasso regression" (GLR) by introducing kernels. In GLR, the loss function is defined as

L = \sum_i \Big( y_i - \sum_j \alpha_j k(x_i, x_j) \Big)^2 + \lambda \sum_i \|\alpha_i\|_1

where α_j can be regarded as the regression coefficient corresponding to the jth basis in an instance space (more precisely, a kernel space with its basis defined on all examples), and k(x_i, x_j) represents some kernel function over the "argument" instance x_i and the "basis" instance x_j. The non-linearity can be captured by a non-linear kernel. This loss function typically yields a sparse solution in the instance space, but not in the feature space where the data were originally represented. Thus GLR does not lead to compression of the data in the feature space.
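To make the contrast concrete: because the GLR loss is simply an L1-penalized least-squares problem over the instance coefficients α, it can be sketched by running Lasso on the kernel matrix itself. The RBF kernel and the penalty value below are illustrative choices of ours, not those of [Roth, 2004]; the sparsity it produces is over instances, not features.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

# X (n_samples x n_features) and y as in the previous sketch.
K = rbf_kernel(X, X, gamma=0.5)     # n x n kernel matrix over instances
glr = Lasso(alpha=0.1).fit(K, y)    # L1 penalty on the alpha_j
print("basis instances kept:", np.flatnonzero(glr.coef_))
# Sparsity is obtained over instances (columns of K), not over features.
```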
[Weston et al., 2000], [Canu et al., 2002] and [Krishnapuram et al., 2003] addressed the limitation from a different angle. They introduced feature scaling kernels of the form

K_\theta(x_i, x_j) = \phi(x_i * \theta) \cdot \phi(x_j * \theta) = K(x_i * \theta, x_j * \theta)

where x_i * θ denotes the component-wise product between two vectors: x_i * θ = (x_{i1}θ_1, ..., x_{ip}θ_p). For example, [Krishnapuram et al., 2003] used a feature scaling polynomial kernel

K_\gamma(x_i, x_j) = \Big( 1 + \sum_p \gamma_p x_{ip} x_{jp} \Big)^k,

where γ_p = θ_p^2. With a norm-1 or norm-0 penalizer on γ in the loss function of a feature scaling kernel machine, a sparse solution is supposed to identify the most influential features. Notice that in this formalism the feature scaling vector θ is inside the kernel function, which means that the solution space of θ could be non-convex. Thus, estimating θ in feature scaling kernel machines is a much harder problem than the convex optimization problem in a conventional SVM, whose weight parameters all lie outside of the kernel functions.
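For concreteness, the feature scaling polynomial kernel above can be written as the small function below (a sketch of ours; the exponent k and the scaling vector γ are exactly the quantities one would have to learn, which is where the non-convexity enters).

```python
import numpy as np

def feature_scaling_poly_kernel(xi, xj, gamma, k=2):
    """K_gamma(xi, xj) = (1 + sum_p gamma_p * xi_p * xj_p)^k,
    with gamma_p = theta_p**2 scaling feature p inside the kernel."""
    return (1.0 + np.sum(gamma * xi * xj)) ** k

# Because gamma sits inside the kernel, any loss built on this kernel is
# generally non-convex in gamma, unlike the weights of a standard SVM.
```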
What we seek here is an alternative approach that guarantees a sparse solution in the feature space, that can capture both linear and non-linear relationships between features and the response variable, and that does not involve parameter optimization inside the kernel functions. The last property is particularly desirable because it allows us to leverage much of the existing work on kernel machines that has proven so successful in SVM-related research.
We propose a new approach whose key idea is to re-formulate and extend Lasso regression into a form that is similar to SVM, except that it generates a sparse solution in the feature space rather than in the instance space. We call this newly formulated and extended Lasso regression the "Feature Vector Machine" (FVM). We will show (in Section 2) that FVM has many interesting properties that mirror SVM: the concepts of support vectors, kernels and slack variables can all be easily adapted to FVM. Most importantly, all the parameters we need to estimate for FVM lie outside of the kernel functions, ensuring the convexity of the solution space, just as in SVM.¹ When a linear kernel is used with no slack variables, FVM reduces to the standard Lasso regression.
¹ Notice that we can not only use FVM to select important features from training data, but also use it to predict the values of response variables for test data (see section 5). We only need convex optimization in the training phase of FVM. In the test phase, FVM makes a prediction for each test example independently, which involves only a one-dimensional optimization problem with respect to the response variable of that test example. Although the optimization in the test phase may be non-convex, it is relatively easy to solve because it is only one-dimensional. This is the price we pay for avoiding the high-dimensional non-convex optimization in the training phase, which may involve thousands of model parameters.
We notice that [Hochreiter et al., 2004] have recently developed an interesting feature selection technique named the "potential SVM", which has the same form as the basic version of FVM (with a linear kernel and no slack variables). However, they did not explore the relationship between the "potential SVM" and Lasso regression. Furthermore, their method does not work for feature selection tasks with non-linear models, since they did not introduce the concept of kernels defined on feature vectors.
In section 2, we analyze some geometric similarities between the solution hyper-planes in the standard Lasso regression and in SVM. In section 3, we re-formulate Lasso regression in an SVM-style form; in this form, all the operations on the training data can be expressed as dot products between feature vectors. In section 4, we introduce kernels (defined on feature vectors) to FVM so that it can be used for feature selection with non-linear models. In section 5, we discuss some extensions of FVM. In section 6, we present experiments, and in section 7 we conclude.

2 Geometric parity between the solution hyper-planes of Lasso regression and SVM
Formally, let X = [x_1, ..., x_N] denote a sample matrix, where each column x_i = (x_{i1}, ..., x_{iK})^T represents a sample vector defined on K features. A feature vector can be defined as a transposed row of the sample matrix, i.e., f_q = (x_{1q}, ..., x_{Nq})^T (corresponding to the qth row of X). Note that we can write X^T = [f_1, ..., f_K] = F. For convenience, let y = (y_1, ..., y_N)^T denote the response vector containing the responses of all the samples.
Now consider an example space in which each basis is represented by an x_i in our sample matrix (note that this is different from the space "spanned" by the sample vectors). In this example space, both the feature vectors f_q and the response vector y can be regarded as points. It can be shown that the solution of Lasso regression has a very intuitive meaning in the example space: the regression coefficients can be regarded as the weights of the feature vectors in the example space; moreover, all the non-zero weighted feature vectors lie on two parallel hyper-planes in the example space. These feature vectors, together with the response variable, determine the directions of these two hyper-planes.
This geometric view can be drawn from the following recast of the Lasso regression due
to [Perkins et al., 2003]:
\Big| \sum_i \Big( y_i - \sum_p \beta_p x_{ip} \Big) x_{iq} \Big| \le \frac{\lambda}{2}, \quad \forall q
\;\Rightarrow\; \big| f_q^T \big( y - [f_1, \ldots, f_K]\beta \big) \big| \le \frac{\lambda}{2}, \quad \forall q.        (1)
It is apparent from the above equation that y − [f1 , . . . , fK ]β defines the orientation of a
separation hyper-plane. It can be shown that equality only holds for non-zero weighted
features, and all the zero weighted feature vectors are between the hyper-planes with λ/2
margin (Fig. 1a).
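The "sandwich" picture in Eq. (1) is easy to check numerically: for a fitted β, the quantity |f_q^T (y - Fβ)| equals λ/2 for the non-zero weighted features and stays strictly below λ/2 for the zero weighted ones. The sketch below is our own helper, assuming the usual samples-by-features layout of the data matrix.

```python
import numpy as np

def lasso_margins(X, y, beta, lam):
    """Return |f_q^T (y - X beta)| for every feature q, together with lam/2.

    At a Lasso solution these values sit on or inside the lam/2 margin:
    equality for non-zero weighted features, strict inequality otherwise.
    X is (n_samples x n_features), so its q-th column is the feature vector f_q.
    """
    residual = y - X @ beta
    return np.abs(X.T @ residual), lam / 2.0
```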
The separating hyper-planes of a (hard, linear) SVM have properties similar to those of the regression hyper-planes described above, although the former are defined in the feature space (in which each axis represents a feature and each point represents a sample) instead of the example space. In an SVM, all the non-zero weighted samples also lie on the two margin-defining separating hyper-planes (as is the case in Lasso regression), whereas all the zero-weighted samples now lie outside the pair of hyper-planes (Fig. 1b). It is well known that the classification hyper-planes in SVM can be extended to hyper-surfaces by introducing kernels defined on example vectors; in this way, SVM can model non-linear dependencies between the samples and the classification boundary. Given the similarity of the geometric structures of Lasso regression and SVM, it is natural to pursue in parallel how one can apply similar "kernel tricks" to the feature vectors in Lasso regression, so that its feature selection power can be extended to non-linear models. This is the intention of this paper, and we envisage fully leveraging much of the computational and optimization machinery developed in the SVM community for our task.

Figure 1: Lasso regression vs. SVM. (a) The solution of Lasso regression in the example space. X1 and X2 represent two examples. Only features a and d have non-zero weights, and hence are the support features. (b) The solution of SVM in the feature space. Samples X1, X3 and X5 are in one class, and X2, X4, X6 and X8 are in the other. X1 and X2 are the support vectors (i.e., the samples with non-zero weights).

3 A re-formulation of Lasso regression akin to SVM

[Hochreiter et al., 2004] have proposed a "potential SVM" as follows:

\min_\beta \; \frac{1}{2} \sum_i \Big( \sum_p \beta_p x_{ip} \Big)^2
\text{s.t.} \; \Big| \sum_i \Big( y_i - \sum_p \beta_p x_{ip} \Big) x_{iq} \Big| \le \frac{\lambda}{2}, \quad \forall q.        (2)

To clean this up a little, we rewrite Eq. (2) in linear-algebra form:

\min_\beta \; \frac{1}{2} \big\| [f_1, \ldots, f_K]\beta \big\|^2
\text{s.t.} \; \big| f_q^T \big( y - [f_1, \ldots, f_K]\beta \big) \big| \le \frac{\lambda}{2}, \quad \forall q.        (3)

A quick look at this formulation reveals that it shares the constraint that must be satisfied at the solution of Lasso regression. Unfortunately, this connection was not further explored in [Hochreiter et al., 2004], e.g., to relate the objective function to that of Lasso regression, or to extend the objective function using kernel tricks in a way similar to SVM. Here we show that the solution to Eq. (2) is exactly the same as that of a standard Lasso regression; in other words, Lasso regression can be re-formulated as Eq. (2). Then, based on this re-formulation, we show how to introduce kernels to allow feature selection under a non-linear Lasso regression. We refer to the optimization problem defined by Eq. (3), and its kernelized extensions, as the Feature Vector Machine (FVM).
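Since Eq. (3) is a convex quadratic program, it can be handed directly to a generic convex solver. The sketch below uses cvxpy (our choice of tooling, not the authors'); X follows the samples-by-features convention, so its columns are the feature vectors f_q. With a linear kernel this recovers the ordinary Lasso solution, consistent with the equivalence established below.

```python
import cvxpy as cp
import numpy as np

def fvm_linear(X, y, lam):
    """Solve Problem (3): minimize 0.5 * ||X beta||^2
       subject to |f_q^T (y - X beta)| <= lam/2 for every feature q,
       where f_q is the q-th column of X."""
    n, k = X.shape
    beta = cp.Variable(k)
    residual = y - X @ beta
    objective = cp.Minimize(0.5 * cp.sum_squares(X @ beta))
    constraints = [cp.abs(X.T @ residual) <= lam / 2.0]
    cp.Problem(objective, constraints).solve()
    return beta.value
```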
Proposition 1: Consider the Lasso regression problem \min_\beta \sum_i \big( \sum_p x_{ip}\beta_p - y_i \big)^2 + \lambda \sum_p |\beta_p|. Suppose \beta is such that: if \beta_q = 0, then \big| \sum_i \big( \sum_p \beta_p x_{ip} - y_i \big) x_{iq} \big| < \lambda/2; if \beta_q < 0, then \sum_i \big( \sum_p \beta_p x_{ip} - y_i \big) x_{iq} = \lambda/2; and if \beta_q > 0, then \sum_i \big( \sum_p \beta_p x_{ip} - y_i \big) x_{iq} = -\lambda/2. Then \beta is the solution of the Lasso regression defined above. For convenience, we refer to these three conditions on \beta as the Lasso sandwich.
Proof: see [Perkins et al., 2003].
Proposition 2: The solution \beta of Problem (3) satisfies the Lasso sandwich.
Sketch of proof: Following the equivalence between the feature matrix F and the sample matrix X (see the beginning of §2), Problem (3) can be re-written as

\min_\beta \; \frac{1}{2} \| X^T \beta \|^2
\text{s.t.} \; X(X^T\beta - y) - \frac{\lambda}{2} e \le 0, \quad X(X^T\beta - y) + \frac{\lambda}{2} e \ge 0,        (4)

where e is the K-dimensional vector of all ones. Following the standard constrained optimization procedure, we can derive the dual of this optimization problem. The Lagrangian L is given by

L = \frac{1}{2}\beta^T X X^T \beta - \alpha_+^T \Big( X(X^T\beta - y) - \frac{\lambda}{2} e \Big) + \alpha_-^T \Big( X(X^T\beta - y) + \frac{\lambda}{2} e \Big)
where \alpha_+ and \alpha_- are K × 1 vectors with non-negative elements. The optimizer satisfies

\nabla_\beta L = X X^T \beta - X X^T (\alpha_+ - \alpha_-) = 0.

Suppose the data matrix X has been pre-processed so that the feature vectors are centered and normalized. In this case the elements of XX^T reflect the correlation coefficients of feature pairs and XX^T is non-singular. Thus \beta = \alpha_+ - \alpha_- at the solution. For any element \beta_q > 0, \alpha_{+q} must be larger than zero, and from the KKT conditions we know that \sum_i (y_i - \sum_p \beta_p x_{ip}) x_{iq} = -\lambda/2 holds in this case. For the same reason, when \beta_q < 0, \alpha_{-q} must be larger than zero, and thus \sum_i (y_i - \sum_p \beta_p x_{ip}) x_{iq} = \lambda/2 holds. When \beta_q = 0, \alpha_{+q} and \alpha_{-q} must both be zero (it is easy to see from the KKT conditions that they cannot both be non-zero), and therefore both \sum_i (y_i - \sum_p \beta_p x_{ip}) x_{iq} > -\lambda/2 and \sum_i (y_i - \sum_p \beta_p x_{ip}) x_{iq} < \lambda/2 hold, which means |\sum_i (y_i - \sum_p \beta_p x_{ip}) x_{iq}| < \lambda/2.

Theorem 3: Problem (3) ≡ Lasso regression.
Proof: Follows from Propositions 1 and 2.

4 Feature kernels
In many cases, the dependencies between feature vectors are non-linear. Analogous to the SVM, here we introduce kernels that capture such non-linearity. Note that unlike SVM, our kernels are defined on feature vectors instead of sample vectors (i.e., on the rows rather than the columns of the data matrix). Such kernels also allow us to easily incorporate certain domain knowledge into the classifier.
Suppose that two feature vectors f_p and f_q have a non-linear dependency relationship. In the absence of linear interaction between f_p and f_q in the original space, we assume that they can be mapped to some (higher-dimensional, possibly infinite-dimensional) space via a transformation φ(·), so that φ(f_p) and φ(f_q) interact linearly, i.e., via a dot product φ(f_p)^T φ(f_q). We introduce the kernel K(f_p, f_q) = φ(f_p)^T φ(f_q) to represent the outcome of this operation.
Replacing f with φ(f) in Problem (3), we have

\min_\beta \; \frac{1}{2} \sum_{p,q} \beta_p \beta_q K(f_p, f_q)
\text{s.t.} \; \Big| \sum_p \beta_p K(f_q, f_p) - K(f_q, y) \Big| \le \frac{\lambda}{2}, \quad \forall q.        (5)

Now, in Problem (5), we no longer have φ(·), which means we do not have to work in the transformed feature space, which could be high- or infinite-dimensional, in order to capture the non-linearity of features. The kernel K(·, ·) can be any function whose kernel matrix is symmetric and positive semi-definite. When domain knowledge from experts is available, it can be incorporated into the choice of kernel (e.g., based on the distribution of feature values). When domain knowledge is not available, we can use general kernels that detect non-linear dependencies without any distributional assumptions. In the following we give one such example.
One possible kernel is the mutual information [Cover et al., 1991] between two feature vectors: K(f_p, f_q) = MI(f_p, f_q). This kernel requires a pre-processing step to discretize the elements of the feature vectors, because they are continuous in general. In this paper, we discretize the continuous variables according to their ranks across the different examples. Suppose we have N examples in total. For each feature, we sort its values in these N examples; the smallest m values are assigned scale 1, the (m+1)th to 2m-th smallest values are assigned scale 2, and this process is iterated until all the values are assigned corresponding scales. It is easy to see that in this way we can guarantee that for any two features p and q, K(f_p, f_p) = K(f_q, f_q), which means the feature vectors are normalized and have the same length in the φ space (residing on a common sphere centered at the origin).
The mutual information kernel has several good properties. For example, it is symmetric (i.e., K(f_p, f_q) = K(f_q, f_p)), non-negative, and can be normalized. It also has an intuitive interpretation related to the redundancy between features. Therefore, non-linear feature selection using this generalized Lasso regression with the mutual information kernel yields human-interpretable results.
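A sketch of this construction follows (our own code): features are discretized by rank into a fixed number of bins, 10 as in Section 6, and sklearn's mutual_info_score stands in for the mutual information estimate.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def rank_discretize(f, n_bins=10):
    """Assign each value of feature vector f to a bin by its rank,
    so every feature gets the same marginal distribution."""
    order = np.argsort(f)
    bins = np.empty_like(order)
    bins[order] = np.arange(len(f)) * n_bins // len(f)
    return bins

def mi_kernel_matrix(F, n_bins=10):
    """F has one feature vector per row; returns K with
    K[p, q] = MI(f_p, f_q) computed on the rank-discretized values."""
    D = np.array([rank_discretize(f, n_bins) for f in F])
    k = len(D)
    K = np.empty((k, k))
    for p in range(k):
        for q in range(p, k):
            K[p, q] = K[q, p] = mutual_info_score(D[p], D[q])
    return K
```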

5 Some extensions and discussions about FVM

As we have shown, FVM is a straightforward feature selection algorithm for non-linear features captured in a kernel; the selection can be done by solving a standard SVM problem in the feature space, which yields an optimal vector β most of whose elements are zero. It turns out that the same procedure also seamlessly leads to a Lasso-style regularized non-linear regression capable of predicting the response given data in the original space.
In the prediction phase, all we have to do is keep the trained β fixed and turn the optimization problem (5) into an analogous one that optimizes over the response y. Specifically, given a new sample x_t of unknown response, our sample matrix X grows by one column, X → [X, x_t], which means all our feature vectors get one more dimension. We denote the elongated features by F' = {f'_q}_{q∈A} (note that A is the pruned index set corresponding to features whose weight β_q is non-zero). Let y' denote the elongated response vector due to the newly given sample: y' = (y_1, ..., y_N, y_t)^T. It can be shown that the optimal response y_t can be obtained by solving the following optimization problem²:

\min_{y_t} \; K(y', y') - 2 \sum_{p \in A} \beta_p K(y', f'_p)        (6)

When we replace the kernel function K with a linear dot product, FVM reduces to Lasso regression. Indeed, in this special case it is easy to see from Eq. (6) that y_t = \sum_{p \in A} \beta_p x_{tp}, which is exactly how Lasso regression would predict the response; one predicts y_t from β and x_t without using the training data X. However, when a more complex kernel is used, solving Eq. (6) is not always trivial. In general, to predict y_t we need not only x_t and β, but also the non-zero weighted feature vectors extracted from the training data.
² For simplicity we omit the details here, but as a rough sketch, note that Eq. (5) can be re-formed as

\min_\beta \; \Big\| \phi(y') - \sum_p \beta_p \phi(f'_p) \Big\|^2 + \lambda \sum_p \|\beta_p\|_1.

Replacing the optimization argument β with y_t and dropping terms irrelevant to y_t, we arrive at Eq. (6).
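Putting Eq. (6) into code, prediction for a new sample reduces to a one-dimensional search over y_t. The sketch below uses scipy's bounded scalar minimizer; the search interval, the variable names, and the generic kernel argument (any function defined on elongated feature vectors) are our own assumptions rather than details from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def predict_response(y_train, F_active, beta_active, xt_active, kernel):
    """Predict y_t for a new sample via the 1-D problem in Eq. (6).

    y_train     : responses of the N training samples
    F_active    : selected feature vectors, one per row (indices in A)
    beta_active : their non-zero weights
    xt_active   : the new sample's values on the selected features
    kernel      : kernel function defined on (elongated) feature vectors
    """
    def objective(y_t):
        y_ext = np.append(y_train, y_t)                    # elongated y'
        value = kernel(y_ext, y_ext)
        for b, f, x in zip(beta_active, F_active, xt_active):
            value -= 2.0 * b * kernel(y_ext, np.append(f, x))   # f'_p
        return value

    # The 1-D problem may be non-convex, so we simply search a bounded range.
    lo, hi = y_train.min() - 1.0, y_train.max() + 1.0
    return minimize_scalar(objective, bounds=(lo, hi), method="bounded").x
```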
As in SVM, we can introduce slack variables into FVM to define a "soft" feature surface, but due to space limitations we omit the details here. Essentially, most of the methodology developed for SVM can be easily adapted to FVM for non-linear feature selection.

6 Experiments
We test FVM on a simulated dataset with 100 features and 500 examples. The response variable y in the simulated data is generated by a highly non-linear rule:

y = \sin(10 f_1 - 5) + 4\sqrt{1 - f_2^2} - 3 f_3 + \xi.

Here features f_1 and f_3 are random variables following a uniform distribution in [0, 1]; feature f_2 is a random variable uniformly distributed in [-1, 1]; and ξ represents Gaussian noise. The other 97 features f_4, f_5, ..., f_{100} are conditionally independent of y given the three features f_1, f_2 and f_3. In particular, f_4, ..., f_{33} are all generated by the rule f_j = 3 f_1 + ξ; f_{34}, ..., f_{72} are all generated by the rule f_j = \sin(10 f_2) + ξ; and the remaining features (f_{73}, ..., f_{100}) simply follow a uniform distribution in [0, 1]. Fig. 2 shows our data projected into the space spanned by f_1, f_2 and y.
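For reproducibility, the data-generating process just described can be sketched as follows; the noise level and the random seed are our assumptions, since the paper does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

def noise():
    return 0.1 * rng.normal(size=n)   # noise scale is an assumption

f1 = rng.uniform(0, 1, n)
f2 = rng.uniform(-1, 1, n)
f3 = rng.uniform(0, 1, n)
y = np.sin(10 * f1 - 5) + 4 * np.sqrt(1 - f2 ** 2) - 3 * f3 + noise()

# f4..f33 depend on f1, f34..f72 depend on f2, f73..f100 are pure noise.
redundant_f1 = [3 * f1 + noise() for _ in range(30)]
redundant_f2 = [np.sin(10 * f2) + noise() for _ in range(39)]
irrelevant = [rng.uniform(0, 1, n) for _ in range(28)]

F = np.vstack([f1, f2, f3] + redundant_f1 + redundant_f2 + irrelevant)  # 100 x 500
```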
We use the mutual information kernel for our FVM. For each feature, we sort its values across the examples and use the ranks to discretize them into 10 scales (thus each scale corresponds to 50 data points). An FVM can be solved by quadratic programming, but more efficient solutions exist. [Perkins et al., 2003] proposed a fast grafting algorithm to solve Lasso regression, which is a special case of FVM when a linear kernel is used. In our implementation, we extend the fast grafting algorithm to FVM with more general kernels: the only difference is that each time we need to calculate \sum_i x_{ip} x_{iq}, we calculate K(f_p, f_q) instead. We found the fast grafting algorithm to be very efficient in our case because it exploits the sparsity of the FVM solution.
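For completeness, here is how the kernelized Problem (5) could be solved directly with a generic convex solver rather than by grafting. This is our own sketch using cvxpy, not the authors' implementation, and it assumes the kernel matrix is positive semi-definite. In practice one would build K_ff with the mutual information kernel from Section 4 and compute k_fy by treating the (discretized) response y as one more feature vector.

```python
import cvxpy as cp
import numpy as np

def fvm_kernelized(K_ff, k_fy, lam):
    """Solve Problem (5) for a given kernel on feature vectors.

    K_ff : (k x k) kernel matrix between feature vectors, assumed PSD
    k_fy : length-k vector of kernel values K(f_q, y)
    lam  : regularization parameter controlling sparsity
    """
    k = K_ff.shape[0]
    beta = cp.Variable(k)
    # 0.5 * sum_{p,q} beta_p beta_q K(f_p, f_q); K_ff must be PSD for quad_form.
    objective = cp.Minimize(0.5 * cp.quad_form(beta, K_ff))
    constraints = [cp.abs(K_ff @ beta - k_fy) <= lam / 2.0]
    cp.Problem(objective, constraints).solve()
    return beta.value
```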
We apply both standard Lasso regression and FVM with the mutual information kernel to this dataset. The value of the regularization parameter λ can be tuned to control the number of non-zero weighted features. In our experiment, we tried two choices of λ for both FVM and the standard Lasso regression: in one case, we set λ such that only 3 non-zero weighted features are selected; in the other, we relaxed it a bit and allowed 10 features.
The results are very encouraging. As shown in Fig. 3, under the stringent λ, FVM successfully identified the three correct features, f_1, f_2 and f_3, whereas Lasso regression missed f_1 and f_2, which are non-linearly correlated with y. Even when λ was relaxed, Lasso regression still missed the right features, whereas FVM remained very robust.
Figure 2: The responses y and the two features f_1 and f_2 in our simulated data. Two graphs from different angles are plotted to show the distribution more clearly in 3D space.

Figure 3: Results of FVM and the standard Lasso regression on this dataset. The X axis represents the feature IDs and the Y axis represents the weights assigned to the features. The two left graphs show the case where 3 features are selected by each algorithm, and the two right graphs show the case where 10 features are selected. From the bottom left graph, we can see that FVM successfully identified f_1, f_2 and f_3 as the three non-zero weighted features. From the top left graph, we can see that Lasso regression missed f_1 and f_2, which are non-linearly correlated with y. The two right graphs show similar patterns.

7 Conclusions

In this paper, we proposed a novel non-linear feature selection approach named FVM, which extends standard Lasso regression by introducing kernels defined on feature vectors. FVM has many interesting properties that mirror the well-known SVM, and can therefore leverage many of the computational advantages of the latter approach. Our experiments with FVM on highly non-linear and noisy simulated data show encouraging results: it correctly identifies the small number of dominating features that are non-linearly correlated to the response variable, a task the standard Lasso fails to complete.

References
[Canu et al., 2002] Canu, S. and Grandvalet, Y. Adaptive Scaling for Feature Selection in SVMs. NIPS 15, 2002.
[Hochreiter et al., 2004] Hochreiter, S. and Obermayer, K. Gene Selection for Microarray Data. In Kernel Methods in Computational Biology, pp. 319-355, MIT Press, 2004.
[Krishnapuram et al., 2003] Krishnapuram, B. et al. Joint classifier and feature optimization for cancer diagnosis using gene expression data. The Seventh Annual International Conference on Research in Computational Molecular Biology (RECOMB), ACM Press, April 2003.
[Ng et al., 2003] Ng, A. Feature selection, L1 vs. L2 regularization, and rotational invariance. ICML 2004.
[Perkins et al., 2003] Perkins, S., Lacker, K. and Theiler, J. Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space. JMLR (2003), 1333-1356.
[Roth, 2004] Roth, V. The Generalized LASSO. IEEE Transactions on Neural Networks (2004), Vol. 15, No. 1.
[Tibshirani et al., 1996] Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B (1996), 58, No. 1, 267-288.
[Cover et al., 1991] Cover, T. M. and Thomas, J. A. Elements of Information Theory. New York: John Wiley & Sons (1991).
[Weston et al., 2000] Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T. and Vapnik, V. Feature Selection for SVMs. NIPS 13, 2000.
