Fast Kernel Classifiers with Online and Active Learning
Antoine Bordes, Seyda Ertekin, Jason Weston, and Léon Bottou
Abstract
Very high dimensional learning systems become theoretically possible when training examples are
abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm
should at least take a brief look at each example. But should all examples be given equal attention?
This contribution proposes an empirical answer. We first present an online SVM algorithm
based on this premise. LASVM yields competitive misclassification rates after a single pass over
the training examples, outspeeding state-of-the-art SVM solvers. Then we show how active exam-
ple selection can yield faster training, higher accuracies, and simpler models, using only a fraction
of the training example labels.
1. Introduction
Electronic computers have vastly enhanced our ability to compute complicated statistical models.
Both theory and practice have adapted to take into account the essential compromise between the
number of examples and the model capacity (Vapnik, 1998). Cheap, pervasive and networked com-
puters are now enhancing our ability to collect observations to an even greater extent. Data sizes
outgrow computer speed. During the last decade, processors became 100 times faster while hard disks
became 1000 times bigger.
Very high dimensional learning systems become theoretically possible when training examples
are abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm
should at least take a brief look at each example. But should all training examples be given equal
attention?
Section 2 presents kernel classifiers such as Support Vector Machines (SVM). Kernel classi-
fiers are convenient for our purposes because they clearly express their internal states in terms
of subsets of the training examples.
Section 3 proposes a novel online algorithm, LASVM, which converges to the SVM solution.
Experimental evidence on diverse data sets indicates that it reliably reaches competitive ac-
curacies after performing a single pass over the training set. It uses less memory and trains
significantly faster than state-of-the-art SVM solvers.
Section 4 investigates two criteria to select informative training examples at each iteration
instead of sequentially processing all examples. Empirical evidence shows that selecting in-
formative examples without making use of the class labels can drastically reduce the training
time and produce much more compact classifiers with equivalent or superior accuracy.
Section 5 discusses the above results and formulates theoretical questions. The simplest ques-
tion involves the convergence of these algorithms and is addressed by the appendix. Other
questions of greater importance remain open.
2. Kernel Classifiers
Early linear classifiers associate classes y = ±1 to patterns x by first transforming the patterns into
feature vectors Φ(x) and taking the sign of a linear discriminant function:
\[
\hat{y}(x) = w'\,\Phi(x) + b. \tag{1}
\]
The parameters w and b are determined by running some learning algorithm on a set of training
examples (x_1, y_1) ... (x_n, y_n). The feature function Φ is usually hand chosen for each particular
problem (Nilsson, 1965).
Aizerman et al. (1964) transform such linear classifiers by leveraging two theorems of the Re-
producing Kernel theory (Aronszajn, 1950).
The Representation Theorem states that many Φ-machine learning algorithms produce parameter
vectors w that can be expressed as a linear combination of the training patterns:
\[
w = \sum_{i=1}^{n} \alpha_i\, \Phi(x_i).
\]
The linear discriminant function (1) can then be written as a kernel expansion
\[
\hat{y}(x) = \sum_{i=1}^{n} \alpha_i\, K(x, x_i) + b, \tag{2}
\]
where the kernel function K(x, y) represents the dot products Φ(x)'Φ(y) in feature space. This
expression is most useful when a large fraction of the coefficients α_i are zero. Examples such that
α_i ≠ 0 are then called Support Vectors.
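As a concrete illustration of the kernel expansion (2), here is a minimal Python sketch with an RBF kernel; the function names and the default value of gamma are ours and are not taken from any particular SVM package.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.01):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-gamma * np.dot(d, d))

def kernel_expansion(x, support_patterns, alpha, b, kernel=rbf_kernel):
    """Kernel expansion (2): yhat(x) = sum_i alpha_i K(x, x_i) + b.

    Only the support vectors (alpha_i != 0) need to be kept in the sum."""
    return sum(a * kernel(x, xi) for a, xi in zip(alpha, support_patterns)) + b
```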
Mercer's Theorem precisely states which kernel functions correspond to a dot product for some
feature space. Kernel classifiers deal with the kernel function K(x, y) without explicitly using the
corresponding feature function Φ(x). For instance, the well known RBF kernel is K(x, y) = e^{-γ‖x-y‖²}.
Support Vector Machines (Cortes and Vapnik, 1995) determine the parameters w and b of the soft-margin classifier by minimizing the regularized hinge loss
\[
\min_{w,b}\;\; \frac{1}{2}\,\|w\|^2 + C \sum_{i=1}^{n} \max\big(0,\; 1 - y_i\,\hat{y}(x_i)\big). \tag{3}
\]
For very large values of the hyper-parameter C, this expression minimizes ‖w‖² under the constraint
that all training examples are correctly classified with a margin y_i ŷ(x_i) greater than 1. Smaller values
of C relax this constraint and produce markedly better results on noisy problems (Cortes and Vapnik,
1995).
In practice this is achieved by solving the dual of this convex optimization problem. The coefficients
α_i of the SVM kernel expansion (2) are found by defining the dual objective function
\[
W(\alpha) = \sum_{i} \alpha_i y_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j) \tag{4}
\]
and solving the SVM Quadratic Programming (QP) problem
\[
\max_{\alpha} \; W(\alpha) \quad\text{with}\quad
\begin{cases}
\;\sum_i \alpha_i = 0 \\
\; A_i \le \alpha_i \le B_i \\
\; A_i = \min(0, C y_i) \\
\; B_i = \max(0, C y_i).
\end{cases} \tag{5}
\]
The above formulation slightly deviates from the standard formulation (Cortes and Vapnik, 1995)
because it makes the α_i coefficients positive when y_i = +1 and negative when y_i = −1.
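For readers who prefer code to notation, the following short Python sketch (our own naming) writes the dual objective (4) and the box bounds of (5) explicitly.

```python
import numpy as np

def box_bounds(y, C):
    """Bounds of (5): A_i = min(0, C*y_i) and B_i = max(0, C*y_i) for labels y_i in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    return np.minimum(0.0, C * y), np.maximum(0.0, C * y)

def dual_objective(alpha, y, K):
    """Dual objective (4): W(alpha) = sum_i alpha_i y_i - 1/2 sum_ij alpha_i alpha_j K_ij."""
    alpha = np.asarray(alpha, dtype=float)
    y = np.asarray(y, dtype=float)
    K = np.asarray(K, dtype=float)
    return alpha @ y - 0.5 * alpha @ K @ alpha
```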
SVMs have been very successful and are very widely used because they reliably deliver state-
of-the-art classifiers with minimal tweaking.
Computational Cost of SVMs There are two intuitive lower bounds on the computational cost
of any algorithm able to solve the SVM QP problem for arbitrary matrices K_ij = K(x_i, x_j).
1. Suppose that an oracle reveals which examples are not support vectors (α_i = 0) and which are
bounded support vectors (|α_i| = C). Computing the R remaining free coefficients then amounts
to solving an R × R linear system, which requires a number of operations proportional to R³.
2. Simply verifying that a vector α is a solution of the SVM QP problem involves computing
the gradients of W(α) and checking the Karush-Kuhn-Tucker optimality conditions (Vapnik,
1998). With n examples and S support vectors, this requires a number of operations proportional
to n·S.
Few support vectors reach the upper bound C when it gets large. The cost is then dominated by
the R³ ≈ S³ term. Otherwise the term n·S is usually larger. The final number of support vectors therefore
is the critical component of the computational cost of the SVM QP problem.
Assume that increasingly large sets of training examples are drawn from an unknown distribution
P(x, y). Let B be the error rate achieved by the best decision function (1) for that distribution.
When B > 0, Steinwart (2004) shows that the number of support vectors is asymptotically equivalent
to 2nB. Therefore, regardless of the exact algorithm used, the asymptotic computational
cost of solving the SVM QP problem grows at least like n² when C is small and n³ when C gets
large. Empirical evidence shows that modern SVM solvers (Chang and Lin, 2001-2004; Collobert
and Bengio, 2001) come close to these scaling laws.
Practice however is dominated by the constant factors. When the number of examples grows,
the kernel matrix K_ij = K(x_i, x_j) becomes very large and cannot be stored in memory. Kernel values
must be computed on the fly or retrieved from a cache of often accessed values. When the cost of
computing each kernel value is relatively high, the kernel cache hit rate becomes a major component
of the cost of solving the SVM QP problem (Joachims, 1999). Larger problems must be addressed
by using algorithms that access kernel values with very consistent patterns.
Section 3 proposes an Online SVM algorithm that accesses kernel values very consistently.
Because it computes the SVM optimum, this algorithm cannot improve on the n2 lower bound.
Because it is an online algorithm, early stopping strategies might give approximate solutions in
much shorter times. Section 4 suggests that this can be achieved by carefully choosing which
examples are processed at each iteration.
Before introducing the new Online SVM, let us briefly describe other existing online kernel
methods, beginning with the kernel Perceptron.
Kernel Perceptron
1) S ← ∅ ,  b ← 0.
2) Pick a random example (x_t, y_t).
3) Compute ŷ(x_t) = Σ_{i∈S} α_i K(x_t, x_i) + b.
4) If y_t ŷ(x_t) ≤ 0 then S ← S ∪ {t} ,  α_t ← y_t.
5) Return to step 2.
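A direct Python transcription of this pseudocode might look as follows; the epoch loop and the random visiting order are our additions for the offline setting.

```python
import random

def kernel_perceptron(X, Y, kernel, epochs=1):
    """Kernel Perceptron (steps 1-5 above): on a mistake, insert the example
    into the expansion with coefficient alpha_t = y_t."""
    S, alpha, b = [], [], 0.0                                              # step 1
    for _ in range(epochs):
        for t in random.sample(range(len(X)), len(X)):                     # step 2
            yhat = sum(a * kernel(X[t], X[s]) for a, s in zip(alpha, S)) + b  # step 3
            if Y[t] * yhat <= 0:                                           # step 4: mistake
                S.append(t)
                alpha.append(Y[t])
    return S, alpha, b
```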
Such Online Learning Algorithms require very little memory because the examples are pro-
cessed one by one and can be discarded after being examined.
Iterations such that y_t ŷ(x_t) < 0 are called mistakes because they correspond to patterns mis-
classified by the perceptron decision boundary. The algorithm then modifies the decision boundary
by inserting the misclassified pattern into the kernel expansion. When a solution exists, Novikoff's
Theorem (Novikoff, 1962) states that the algorithm converges after a finite number of mistakes, or
equivalently after inserting a finite number of support vectors. Noisy data sets are more problematic.
Large Margin Kernel Perceptrons The success of Support Vector Machines has shown that large
classification margins were desirable. On the other hand, the Kernel Perceptron (Section 2.2) makes
no attempt to achieve large margins because it happily ignores training examples that are very close
to being misclassified.
Many authors have proposed to close the gap with online kernel classifiers by providing larger
margins. The Averaged Perceptron (Freund and Schapire, 1998) decision rule is the majority vote of
all the decision rules obtained after each iteration of the Kernel Perceptron algorithm. This choice
provides a bound comparable to those offered in support of SVMs. Other algorithms (Frieß et al.,
1998; Gentile, 2001; Li and Long, 2002; Crammer and Singer, 2003) explicitly construct larger
margins. These algorithms modify the decision boundary whenever a training example is either
misclassified or classified with an insufficient margin. Such examples are then inserted into the
kernel expansion with a suitable coefficient. Unfortunately, this change significantly increases the
number of mistakes and therefore the number of support vectors. The increased computational cost
and the potential overfitting undermines the positive effects of the increased margin.
Kernel Perceptrons with Removal Step This is why Crammer et al. (2004) suggest an additional
step for removing support vectors from the kernel expansion (2). The Budget Perceptron performs
very nicely on relatively clean data sets.
Online kernel classifiers usually experience considerable problems with noisy data sets. Each
iteration is likely to cause a mistake because the best achievable misclassification rate for such prob-
lems is high. The number of support vectors increases very rapidly and potentially causes overfitting
and poor convergence. More sophisticated support vector removal criteria avoid this drawback (We-
ston et al., 2005). This modified algorithm outperforms all other online kernel classifiers on noisy
data sets and matches the performance of Support Vector Machines with fewer support vectors.
work, LASVM relies on the traditional soft margin SVM formulation, handles noisy data sets, and
is nicely related to the SMO algorithm. Experimental evidence on multiple data sets indicates that
it reliably reaches competitive test error rates after performing a single pass over the training set. It
uses less memory and trains significantly faster than state-of-the-art SVM solvers.
Given a feasible direction u, a direction search computes the step λ ≥ 0 that maximizes W(α + λu)
while keeping α + λu feasible. The maximal feasible step is
\[
\varphi(\alpha, u) =
\begin{cases}
\;0 & \text{if } \|u\| = 0,\\[4pt]
\;\min\Big\{ \dfrac{B_i - \alpha_i}{u_i} \;\text{ for all } i \text{ such that } u_i > 0;\;\;
\dfrac{A_j - \alpha_j}{u_j} \;\text{ for all } j \text{ such that } u_j < 0 \Big\} & \text{otherwise,}
\end{cases} \tag{7}
\]
and the optimal step is
\[
\lambda = \min\left( \varphi(\alpha, u),\; \frac{\sum_i g_i u_i}{\sum_{i,j} u_i u_j K_{ij}} \right), \tag{8}
\]
where g denotes the gradient of the dual objective,
\[
g_k = \frac{\partial W(\alpha)}{\partial \alpha_k}
    = y_k - \sum_i \alpha_i K(x_i, x_k)
    = y_k - \hat{y}(x_k) + b. \tag{9}
\]
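A literal Python rendering of (7)-(8) for a sparse direction u is given below; the helper name and the dictionary representation of u are our own conventions, and K is assumed to be a precomputed Gram matrix.

```python
def direction_search_step(alpha, g, K, u, A, B):
    """Optimal step from (7)-(8) along a sparse direction u, given as a dict
    mapping index -> nonzero coefficient. K is the kernel (Gram) matrix."""
    if not u:                                   # ||u|| = 0: no move, see (7)
        return 0.0
    phi = float("inf")                          # (7): largest feasible step
    for i, ui in u.items():
        if ui > 0:
            phi = min(phi, (B[i] - alpha[i]) / ui)
        else:
            phi = min(phi, (A[i] - alpha[i]) / ui)
    num = sum(g[i] * ui for i, ui in u.items())               # (8): gradient term
    den = sum(ui * uj * K[i][j] for i, ui in u.items()        # (8): curvature term
                                for j, uj in u.items())
    return min(phi, num / den) if den > 0 else phi
```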
Sequential Minimal Optimization   Platt (1999) observes that direction search computations are
much faster when the search direction u mostly contains zero coefficients. At least two nonzero
coefficients are needed to satisfy the equality constraint Σ_k u_k = 0. The Sequential Minimal
Optimization (SMO) algorithm uses search directions whose coefficients are all zero except for a
single +1 and a single −1.
Practical implementations of the SMO algorithm (Chang and Lin, 2001-2004; Collobert and
Bengio, 2001) usually rely on a small positive tolerance τ > 0. They only select directions u such
that φ(α, u) > 0 and u'g > τ. This means that we can move along direction u without immediately
reaching a constraint and increase the value of W(α). Such directions are defined by the so-called
τ-violating pairs (i, j):
\[
(i, j) \text{ is a } \tau\text{-violating pair} \iff
\begin{cases}
\;\alpha_i < B_i \\
\;\alpha_j > A_j \\
\;g_i - g_j > \tau.
\end{cases}
\]
SMO Algorithm
1) Set α ← 0 and compute the initial gradient g (equation 9).
2) Choose a τ-violating pair (i, j). Stop if no such pair exists.
3) λ ← min( (g_i − g_j) / (K_ii + K_jj − 2K_ij) ,  B_i − α_i ,  α_j − A_j )
   α_i ← α_i + λ ,  α_j ← α_j − λ
   g_s ← g_s − λ (K_is − K_js)   ∀ s ∈ {1 … n}
4) Return to step (2).
The above algorithm does not specify how exactly the τ-violating pairs are chosen. Modern
implementations of SMO select the τ-violating pair (i, j) that maximizes the directional gradient u'g.
This choice was described in the context of Optimal Hyperplanes in both (Vapnik, 1982, pages 362-364)
and (Vapnik et al., 1984).
Regardless of how exactly the τ-violating pairs are chosen, Keerthi and Gilbert (2002) assert
that the SMO algorithm stops after a finite number of steps. This assertion is correct despite a slight
flaw in their final argument (Takahashi and Nishi, 2003).
When SMO stops, no τ-violating pair remains. The corresponding α is called a τ-approximate
solution. Proposition 13 in appendix A establishes that such approximate solutions indicate the
location of the solution(s) of the SVM QP problem when the tolerance τ becomes close to zero.
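To make the loop concrete, here is a didactic NumPy sketch of the SMO algorithm above with maximal-violation pair selection. It assumes a precomputed positive definite kernel matrix and labels in {−1, +1}; it is an illustration, not a substitute for an optimized solver such as LIBSVM.

```python
import numpy as np

def smo(K, y, C, tau=1e-3, max_iter=100_000):
    """Didactic SMO (steps 1-4 above) with maximal-violation pair selection.
    K: precomputed kernel matrix (assumed positive definite), y: labels in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    A, B = np.minimum(0.0, C * y), np.maximum(0.0, C * y)
    alpha = np.zeros(n)
    g = y.copy()                                   # step 1: gradient (9) at alpha = 0
    b = 0.0
    for _ in range(max_iter):
        up, dn = alpha < B, alpha > A              # candidate sets for i and j
        if not up.any() or not dn.any():
            break
        i = int(np.argmax(np.where(up, g, -np.inf)))   # step 2: most violating pair
        j = int(np.argmin(np.where(dn, g, +np.inf)))
        b = (g[i] + g[j]) / 2                      # running estimate of the threshold
        if g[i] - g[j] <= tau:                     # no tau-violating pair left: stop
            break
        # step 3: direction search along e_i - e_j, clipped by the box constraints
        lam = min((g[i] - g[j]) / (K[i, i] + K[j, j] - 2 * K[i, j]),
                  B[i] - alpha[i], alpha[j] - A[j])
        alpha[i] += lam
        alpha[j] -= lam
        g -= lam * (K[:, i] - K[:, j])
    return alpha, b
```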
The coefficients α_i are assumed to be zero if i ∉ S. On the other hand, the set S might contain a few
indices i such that α_i = 0.
The two basic operations of the Online LASVM algorithm correspond to steps 2 and 3 of the
SMO algorithm. These two operations differ from each other because they have different ways to
select -violating pairs.
The first operation, PROCESS, attempts to insert example k ∉ S into the set of current support
vectors. In the online setting this can be used to process a new example at time t. It first adds
example k ∉ S into S (steps 1-2). Then it searches a second example in S to find the τ-violating pair
with maximal gradient (steps 3-4) and performs a direction search (step 5).
LASVM PROCESS(k)
1) Bail out if k ∈ S.
2) α_k ← 0 ,  g_k ← y_k − Σ_{s∈S} α_s K_ks ,  S ← S ∪ {k}
3) If y_k = +1 then
      i ← k ,  j ← arg min_{s∈S} g_s  with α_s > A_s
   else
      j ← k ,  i ← arg max_{s∈S} g_s  with α_s < B_s
4) Bail out if (i, j) is not a τ-violating pair.
5) λ ← min( (g_i − g_j) / (K_ii + K_jj − 2K_ij) ,  B_i − α_i ,  α_j − A_j )
   α_i ← α_i + λ ,  α_j ← α_j − λ
   g_s ← g_s − λ (K_is − K_js)   ∀ s ∈ S
The second operation, REPROCESS, removes some elements from S. It first searches the τ-violating
pair of elements of S with maximal gradient (steps 1-2), and performs a direction search
(step 3). Then it removes blatant non support vectors (step 4). Finally it computes two useful
quantities: the bias term b of the decision function (2) and the gradient δ of the most τ-violating pair
in S.
LASVM REPROCESS
1) i ← arg max_{s∈S} g_s  with α_s < B_s
   j ← arg min_{s∈S} g_s  with α_s > A_s
2) Bail out if (i, j) is not a τ-violating pair.
3) λ ← min( (g_i − g_j) / (K_ii + K_jj − 2K_ij) ,  B_i − α_i ,  α_j − A_j )
   α_i ← α_i + λ ,  α_j ← α_j − λ
   g_s ← g_s − λ (K_is − K_js)   ∀ s ∈ S
4) i ← arg max_{s∈S} g_s  with α_s < B_s
   j ← arg min_{s∈S} g_s  with α_s > A_s
   For all s ∈ S such that α_s = 0:
      If y_s = −1 and g_s ≥ g_i then S ← S \ {s}
      If y_s = +1 and g_s ≤ g_j then S ← S \ {s}
5) b ← (g_i + g_j)/2 ,  δ ← g_i − g_j
Online LASVM After initializing the state variables (step 1), the Online LASVM algorithm al-
ternates PROCESS and REPROCESS a predefined number of times (step 2). Then it simplifies the
kernel expansion by running REPROCESS to remove all τ-violating pairs from the kernel expansion
(step 3).
LASVM
1) Initialization:
   Seed S with a few examples of each class.
   Set α ← 0 and compute the initial gradient g (equation 9).
2) Online Iterations:
   Repeat a predefined number of times:
   - Pick an example k_t
   - Run PROCESS(k_t).
   - Run REPROCESS once.
3) Finishing:
   Repeat REPROCESS until δ ≤ τ.
LASVM can be used in the online setup where one is given a continuous stream of fresh random
examples. The online iterations process fresh training examples as they come. LASVM can also be
used as a stochastic optimization algorithm in the offline setup where the complete training set is
available beforehand. Each iteration randomly picks an example from the training set.
In practice we run the LASVM online iterations in epochs. Each epoch sequentially visits all
the randomly shuffled training examples. After a predefined number P of epochs, we perform the
finishing step. A single epoch is consistent with the use of LASVM in the online setup. Multiple
epochs are consistent with the use of LASVM as a stochastic optimization algorithm in the offline
setup.
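The following Python sketch puts PROCESS, REPROCESS and the epoch loop together. It is only illustrative: names are ours, the kernel matrix is assumed precomputed and positive definite, and the kernel cache and shrinking machinery described later are omitted.

```python
import numpy as np

class LaSVMSketch:
    """Illustrative LASVM skeleton: dense precomputed kernel matrix K (assumed
    positive definite), labels y in {-1, +1}; no kernel cache, no shrinking."""

    def __init__(self, K, y, C, tau=1e-3):
        self.K, self.y, self.tau = K, np.asarray(y, dtype=float), tau
        self.A = np.minimum(0.0, C * self.y)          # box bounds of (5)
        self.B = np.maximum(0.0, C * self.y)
        self.alpha = np.zeros(len(self.y))
        self.g, self.S = {}, set()                    # gradients kept for indices in S
        self.b, self.delta = 0.0, float("inf")

    def _step(self, i, j):
        """Direction search along e_i - e_j (step 5 of PROCESS, step 3 of REPROCESS)."""
        K = self.K
        lam = min((self.g[i] - self.g[j]) / (K[i, i] + K[j, j] - 2 * K[i, j]),
                  self.B[i] - self.alpha[i], self.alpha[j] - self.A[j])
        self.alpha[i] += lam
        self.alpha[j] -= lam
        for s in self.S:
            self.g[s] -= lam * (K[i, s] - K[j, s])

    def _extremes(self):
        up = [s for s in self.S if self.alpha[s] < self.B[s]]
        dn = [s for s in self.S if self.alpha[s] > self.A[s]]
        if not up or not dn:
            return None
        return max(up, key=self.g.get), min(dn, key=self.g.get)

    def process(self, k):
        if k in self.S:                               # step 1: bail out
            return
        self.g[k] = self.y[k] - sum(self.alpha[s] * self.K[k, s] for s in self.S)
        self.S.add(k)                                 # step 2
        if self.y[k] > 0:                             # step 3: find the second element
            cand = [s for s in self.S if self.alpha[s] > self.A[s]]
            pair = (k, min(cand, key=self.g.get)) if cand else None
        else:
            cand = [s for s in self.S if self.alpha[s] < self.B[s]]
            pair = (max(cand, key=self.g.get), k) if cand else None
        if pair and self.g[pair[0]] - self.g[pair[1]] > self.tau:   # step 4
            self._step(*pair)                         # step 5

    def reprocess(self):
        pair = self._extremes()                       # steps 1-2
        if pair is None or self.g[pair[0]] - self.g[pair[1]] <= self.tau:
            # no tau-violating pair inside S: record the gap so the finishing loop stops
            self.delta = 0.0 if pair is None else self.g[pair[0]] - self.g[pair[1]]
            return
        self._step(*pair)                             # step 3
        i, j = self._extremes()                       # step 4: recompute extremes
        gi, gj = self.g[i], self.g[j]
        for s in [s for s in self.S if self.alpha[s] == 0.0]:   # drop blatant non-SVs
            if (self.y[s] < 0 and self.g[s] >= gi) or (self.y[s] > 0 and self.g[s] <= gj):
                self.S.discard(s)
                del self.g[s]
        self.b, self.delta = (gi + gj) / 2, gi - gj   # step 5

    def run(self, order, seeds):
        for k in seeds:                               # step 1: seed S
            self.process(k)
        for k in order:                               # step 2: online iterations
            self.process(k)
            self.reprocess()
        while self.delta > self.tau:                  # step 3: finishing
            self.reprocess()
```

Calling run(order, seeds) with one shuffled pass over the training indices corresponds to the single-epoch setting discussed below; concatenating P shuffled passes corresponds to P epochs before the finishing step.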
Convergence of the Online Iterations   Let us first ignore the finishing step (step 3) and assume
that online iterations (step 2) are repeated indefinitely. Suppose that there are remaining τ-violating
pairs at iteration T.
a.) If there are τ-violating pairs (i, j) such that i ∈ S and j ∈ S, one of them will be exploited by
the next REPROCESS.
b.) Otherwise, if there are τ-violating pairs (i, j) such that i ∈ S or j ∈ S, each subsequent PROCESS
has a chance to exploit one of them. The intervening REPROCESS do nothing because
they bail out at step 2.
c.) Otherwise, all τ-violating pairs involve indices outside S. Subsequent calls to PROCESS and
REPROCESS bail out until we reach a time t > T such that k_t = i and k_{t+1} = j for some τ-violating
pair (i, j). The first PROCESS then inserts i into S and bails out. The following
REPROCESS bails out immediately. Finally the second PROCESS locates pair (i, j).
This case is not important in practice. There usually is a support vector s ∈ S such that
A_s < α_s < B_s. We can then write g_i − g_j = (g_i − g_s) + (g_s − g_j) ≤ 2τ and conclude that we
have already reached a 2τ-approximate solution.
The LASVM online iterations therefore work like the SMO algorithm. Remaining τ-violating
pairs are sooner or later exploited by either PROCESS or REPROCESS. As soon as a τ-approximate
solution is reached, the algorithm stops updating the coefficients α. Theorem 18 in the appendix
gives more precise convergence results for this stochastic algorithm.
The finishing step (step 3) is only useful when one limits the number of online iterations. Run-
ning LASVM usually consists in performing a predefined number P of epochs and running the fin-
ishing step. Each epoch performs n online iterations by sequentially visiting the randomly shuffled
training examples. Empirical evidence suggests indeed that a single epoch yields a classifier almost
as good as the SVM solution.
Computational Cost of LASVM   Both PROCESS and REPROCESS require a number of operations
proportional to the number S of support vectors in set S. Performing P epochs of online iterations
requires a number of operations proportional to n·P·S̄. The average number S̄ of support vectors
scales no more than linearly with n because each online iteration brings at most one new support
vector. The asymptotic cost therefore grows like n² at most. The finishing step is similar to running
an SMO solver on a SVM problem with only S training examples. We recover here the n² to n³
behavior of standard SVM solvers.
Online algorithms access kernel values with a very specific pattern. Most of the kernel values
accessed by PROCESS and REPROCESS involve only support vectors from set S. Only PROCESS
on a new example x_{k_t} accesses S fresh kernel values K(x_{k_t}, x_i) for i ∈ S.
Implementation Details Our LASVM implementation reorders the examples after every PRO-
CESS or REPROCESS to ensure that the current support vectors come first in the reordered list
of indices. The kernel cache records truncated rows of the reordered kernel matrix. SVMLight
(Joachims, 1999) and LIBSVM (Chang and Lin, 2001-2004) also perform such reorderings, but do
so rather infrequently (Joachims, 1999). The reordering overhead is acceptable during the online
iterations because the computation of fresh kernel values takes much more time.
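The caching idea can be sketched independently of the reordering machinery: the snippet below (ours, deliberately simplified) keeps truncated kernel rows restricted to the indices actually requested and evicts least-recently-used rows when the cache is full.

```python
from collections import OrderedDict

class KernelRowCache:
    """Caches partial rows K(x_k, .) restricted to the indices actually requested."""

    def __init__(self, kernel, X, max_rows=1000):
        self.kernel, self.X, self.max_rows = kernel, X, max_rows
        self.rows = OrderedDict()                # k -> {i: K(x_k, x_i)}

    def get(self, k, indices):
        row = self.rows.pop(k, {})               # pop + re-insert keeps LRU order
        for i in indices:
            if i not in row:                     # compute only the missing entries
                row[i] = self.kernel(self.X[k], self.X[i])
        self.rows[k] = row
        if len(self.rows) > self.max_rows:
            self.rows.popitem(last=False)        # evict the least recently used row
        return row
```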
Reordering examples during the finishing step was more problematic. We eventually deployed
an adaptation of the shrinking heuristic (Joachims, 1999) for the finishing step only. The set S of
support vectors is split into an active set Sa and an inactive set Si . All support vectors are initially
active. The REPROCESS iterations are restricted to the active set Sa and do not perform any reorder-
ing. About every 1000 iterations, support vectors that hit the boundaries of the box constraints are
either removed from the set S of support vectors or moved from the active set Sa to the inactive set
Si. When all τ-violating pairs of the active set are exhausted, the inactive set examples are trans-
ferred back into the active set. The process continues as long as the merged set contains τ-violating
pairs.
Figure 1: Compared test error rates for the ten MNIST binary classifiers.
Figure 2: Compared training times for the ten MNIST binary classifiers.
and the same training parameters C = 1000 and τ = 0.001. Unless indicated otherwise, the kernel
cache size is 256MB.
LASVM versus Sequential Minimal Optimization Baseline results were obtained by running
the state-of-the-art SMO solver LIBSVM (Chang and Lin, 2001-2004). The resulting classifier ac-
curately represents the SVM solution.
Two sets of results are reported for LASVM. The LASVM1 results were obtained by performing
a single epoch of online iterations: each training example was processed exactly once during a
single sequential sweep over the training set. The LASVM2 results were obtained by performing
two epochs of online iterations.
Figures 1 and 2 show the resulting test errors and training times. LASVM1 runs about three
times faster than LIBSVM and yields test error rates very close to the LIBSVM results. Standard
paired significance tests indicate that these small differences are not significant. LASVM2 usually
runs faster than LIBSVM and very closely tracks the LIBSVM test errors.
Neither the LASVM1 nor the LASVM2 experiments yield the exact SVM solution. On this data
set, LASVM reaches the exact SVM solution after about five epochs. The first two epochs represent
the bulk of the computing time. The remaining epochs run faster when the kernel cache is large
Figure 3: Training time as a function of the number of support vectors.
Figure 4: Multiclass errors and training times for the MNIST data set.
Figure 5: Compared numbers of support vectors for the ten MNIST binary classifiers.
Figure 6: Training time variation as a function of the cache size. Relative changes with respect to
          the 1GB LIBSVM times are averaged over all ten MNIST classifiers.
enough to hold all the dot products involving support vectors. Yet the overall optimization times are
not competitive with those achieved by LIBSVM.
Figure 3 shows the training time as a function of the final number of support vectors for the
ten binary classification problems. Both LIBSVM and LASVM1 show a linear dependency. The
Online LASVM algorithm seems more efficient overall.
Figure 4 shows the multiclass error rates and training times obtained by combining the ten
classifiers using the well known 1-versus-rest scheme (Schölkopf and Smola, 2002). LASVM1
provides almost the same accuracy with much shorter training times. LASVM2 reproduces the
LIBSVM accuracy with slightly shorter training time.
Figure 5 shows the resulting number of support vectors. A single epoch of the Online LASVM
algorithm gathers most of the support vectors of the SVM solution computed by LIBSVM. The first
iterations of the Online LASVM might indeed ignore examples that later become support vectors.
Performing a second epoch captures most of the missing support vectors.
LASVM versus the Averaged Perceptron The computational advantage of LASVM relies on its
apparent ability to match the SVM accuracies after a single epoch. Therefore it must be compared
with algorithms such as the Averaged Perceptron (Freund and Schapire, 1998) that provably match
well known upper bounds on the SVM accuracies. The AVGPERC1 results in Figures 1 and 2 were
obtained after running a single epoch of the Averaged Perceptron. Although the computing times are
very good, the corresponding test errors are not competitive with those achieved by either LIBSVM
or LASVM. Freund and Schapire (1998) suggest that the Averaged Perceptron approaches the actual
SVM accuracies after 10 to 30 epochs. Doing so no longer provides the theoretical guarantees. The
AVGPERC10 results in Figures 1 and 2 were obtained after ten epochs. Test error rates indeed
approach the SVM results. The corresponding training times are no longer competitive.
Impact of the Kernel Cache Size   These training times stress the importance of the kernel cache
size. Figure 2 shows how AVGPERC10 runs much faster on problems 0, 1, and 6. This happens
because the cache is large enough to accommodate the dot products of all examples with all
support vectors. Each repeated iteration of the Averaged Perceptron then requires very few additional
kernel evaluations. This is much less likely to happen when the training set size increases. Computing
times then increase drastically because repeated kernel evaluations become necessary.
Figure 6 compares how the LIBSVM and LASVM1 training times change with the kernel cache
size. The vertical axis reports the relative changes with respect to LIBSVM with one gigabyte of
kernel cache. These changes are averaged over the ten MNIST classifiers. The plot shows how
LASVM tolerates much smaller caches. On this problem, LASVM with an 8MB cache runs slightly
faster than LIBSVM with a 1024MB cache.
Useful orders of magnitude can be obtained by evaluating how large the kernel cache must be
to avoid the systematic recomputation of dot-products. Following the notations of Section 2.1, let n
be the number of examples, S be the number of support vectors, and R ≤ S the number of support
vectors such that 0 < |α_i| < C.
In the case of LIBSVM, the cache must accommodate about n·R terms: the examples selected
for the SMO iterations are usually chosen among the R free support vectors. Each SMO
iteration needs n distinct dot-products for each selected example.
To perform a single LASVM epoch, the cache must only accommodate about S·R terms: since
the examples are visited only once, the dot-products computed by a PROCESS operation can
only be reutilized by subsequent REPROCESS operations. The examples selected by RE-
PROCESS are usually chosen among the R free support vectors; for each selected example,
REPROCESS needs one distinct dot-product per support vector in set S.
To perform multiple LASVM epochs, the cache must accommodate about n·S terms: the
dot-products computed by processing a particular example are reused when processing the
same example again in subsequent epochs. This also applies to multiple Averaged Perceptron
epochs.
An efficient single epoch learning algorithm is therefore very desirable when one expects S to be
much smaller than n. Unfortunately, this may not be the case when the data set is noisy. Section
3.4 presents results obtained in such less favorable conditions. Section 4 then proposes an active
learning method to contain the growth of the number of support vectors, and recover the full benefits
of the online approach.
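These orders of magnitude are easy to turn into concrete numbers; the helper below (our own, with purely illustrative inputs) converts the three cache-size estimates above into megabytes.

```python
def cache_requirements(n, S, R, bytes_per_entry=4):
    """Approximate kernel cache sizes for the three regimes discussed above,
    returned as (number of cached dot-products, size in MB)."""
    regimes = {
        "LIBSVM (n*R)": n * R,
        "LASVM, single epoch (S*R)": S * R,
        "LASVM, multiple epochs (n*S)": n * S,
    }
    return {name: (entries, entries * bytes_per_entry / 2**20)
            for name, entries in regimes.items()}

# Illustrative call (numbers chosen arbitrarily, not taken from the experiments):
# cache_requirements(n=100_000, S=10_000, R=1_000)
```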
                    LIBSVM                                    LASVM1
Data Set            Error     SV      KCalc    Time           Error     SV      KCalc    Time
Waveform 8.82% 1006 4.2M 3.2s 8.68% 948 2.2M 2.7s
Banana 9.96% 873 6.8M 9.9s 9.98% 869 6.7M 10.0s
Reuters 2.76% 1493 11.8M 24s 2.76% 1504 9.2M 31.4s
USPS 0.41% 236 1.97M 13.5s 0.43% 201 1.08M 15.9s
USPS+N 0.41% 2750 63M 305s 0.53% 2572 20M 178s
Adult 14.90% 11327 1760M 1079s 14.94% 11268 626M 809s
Forest (100k) 8.03% 43251 27569M 14598s 8.15% 41750 18939M 10310s
Forest (521k) 4.84% 124782 316750M 159443s 4.83% 122064 188744M 137183s
Figure 8: Comparison of LIBSVM versus LASVM1: Test error rates (Error), number of support
vectors (SV), number of kernel calls (KCalc), and training time (Time). Bold characters
indicate significant differences.
Relative Variation
Data Set Error SV Time
Waveform -0% -0% +4%
Banana -79% -74% +185%
Reuters 0% -0% +3%
USPS 0% -2% +0%
USPS+N -67% -33% +7%
Adult -13% -19% +80%
Forest (100k) -1% -24% +248%
Forest (521k) -2% -24% +84%
Figure 9: Relative variations of test error, number of support vectors and training time measured
before and after the finishing step.
The quantities g_min and g_max can be interpreted as bounds for the decision threshold b. The quantity
δ = g_max − g_min then represents an uncertainty on the decision threshold b.
The quantity δ also controls how LASVM collects potential support vectors. The definition of
PROCESS and the equality (9) indicate indeed that PROCESS(k) adds the support vector x_k to the
kernel expansion if and only if
\[
y_k\, \hat{y}(x_k) < 1 + \frac{\delta}{2}. \tag{11}
\]
When α is optimal, the uncertainty δ is zero, and this condition matches the Karush-Kuhn-Tucker
condition for support vectors, y_k ŷ(x_k) ≤ 1.
Intuitively, relation (11) describes how PROCESS collects potential support vectors that are compatible
with the current uncertainty level δ on the threshold b. Simultaneously, the REPROCESS
operations reduce δ and discard the support vectors that are no longer compatible with this reduced
uncertainty.
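In code, the insertion condition (11) is a one-line test; the small helper below (ours) just makes the role of the uncertainty δ explicit.

```python
def process_would_add(y_k, yhat_k, delta):
    """Condition (11): PROCESS(k) inserts x_k into the kernel expansion
    if and only if y_k * yhat(x_k) < 1 + delta / 2."""
    return y_k * yhat_k < 1.0 + delta / 2.0
```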
The online iterations of the LASVM algorithm make equal numbers of PROCESS and REPROCESS
for purely heuristic reasons. Nothing guarantees that this is the optimal proportion. The
results reported in Figure 9 clearly suggest to investigate this trade-off more closely.
Variations on REPROCESS   Experiments were carried out with a slightly modified LASVM algorithm:
instead of performing a single REPROCESS, the modified online iterations repeatedly run
REPROCESS until the uncertainty δ becomes smaller than a predefined threshold δ_max.
Figure 10 reports comparative results for the Banana data set. Similar results were obtained
with other data sets. The three plots report test error rates, training time, and number of support
vectors as a function of δ_max. These measurements were performed after one epoch of online iterations
without finishing step, and after one and two epochs followed by the finishing step. The
corresponding LIBSVM figures are indicated by large triangles on the right side of the plot.
Regardless of δ_max, the SVM test error rate can be replicated by performing two epochs followed
by a finishing step. However, this does not guarantee that the optimal SVM solution has been
reached.
Large values of δ_max essentially correspond to the unmodified LASVM algorithm. Small values
of δ_max considerably increase the computation time because each online iteration calls REPROCESS
many times in order to sufficiently reduce δ. Small values of δ_max also remove the LASVM ability
to produce a competitive result after a single epoch followed by a finishing step. The additional
optimization effort discards support vectors more aggressively. Additional epochs are necessary to
recapture the support vectors that should have been kept.
There clearly is a sweet spot around δ_max = 3 when one epoch of online iterations alone almost
matches the SVM performance and also makes the finishing step very fast. This sweet spot is difficult
to find in general. If δ_max is a little bit too small, we must make one extra epoch. If δ_max is a little
Figure 10: Impact of additional REPROCESS measured on the Banana data set. During the LASVM
online iterations, calls to REPROCESS are repeated until δ < δ_max.
bit too large, the algorithm behaves like the unmodified LASVM. Short of a deeper understanding
of these effects, the unmodified LASVM seems to be a robust compromise.
SimpleSVM   The right side of each plot in Figure 10 corresponds to an algorithm that optimizes
the coefficients of the current support vectors at each iteration. This is closely related to the Sim-
pleSVM algorithm (Vishwanathan et al., 2003). Both LASVM and the SimpleSVM update a current
kernel expansion by adding or removing one or two support vectors at each iteration. The two key
differences are the numerical objective of these updates and their computational costs.
Whereas each SimpleSVM iteration seeks the optimal solution of the SVM QP problem re-
stricted to the current set of support vectors, the LASVM online iterations merely attempt to improve
the value of the dual objective function W(α). As a consequence, LASVM needs a finishing step
and the SimpleSVM does not. On the other hand, Figure 10 suggests that seeking the optimum
at each iteration discards support vectors too aggressively to reach competitive accuracies after a
single epoch.
Each SimpleSVM iteration updates the current kernel expansion using rank 1 matrix updates
(Cauwenberghs and Poggio, 2001) whose computational cost grows as the square of the number of
support vectors. LASVM performs these updates using SMO direction searches whose cost grows
linearly with the number of examples. Rank 1 updates make good sense when one seeks the optimal
coefficients. On the other hand, all the kernel values involving support vectors must be stored in
memory. The LASVM direction searches are more amenable to caching strategies for kernel values.
The corresponding gradients are g_k − g_j for positive examples and g_i − g_k for negative examples.
Using the expression (9) of the gradients and the values of b and δ computed during the previous
REPROCESS (10), we can write:
\[
\text{when } y_k = +1, \qquad
g_k - g_j \;=\; y_k g_k - \frac{g_i + g_j}{2} + \frac{g_i - g_j}{2}
\;=\; 1 + \frac{\delta}{2} - y_k\, \hat{y}(x_k)
\]
\[
\text{when } y_k = -1, \qquad
g_i - g_k \;=\; \frac{g_i + g_j}{2} + \frac{g_i - g_j}{2} + y_k g_k
\;=\; 1 + \frac{\delta}{2} - y_k\, \hat{y}(x_k).
\]
This expression shows that the Gradient Selection Criterion simply suggests to pick the most misclassified
example
\[
k_G = \arg\min_{k \notin S} \; y_k\, \hat{y}(x_k). \tag{12}
\]
When training example labels are unreliable, a conservative approach chooses the example k_A
that yields the strongest minimax gradient:
\[
k_A = \arg\min_{k \notin S} \; \big|\hat{y}(x_k)\big|. \tag{13}
\]
This Active Selection Criterion simply chooses the example that comes closest to the current decision
boundary. Such a choice yields a gradient approximately equal to 1 + δ/2 regardless of the
true class of the example.
Criterion (13) does not depend on the labels yk . The resulting learning algorithm only uses the
labels of examples that have been selected during the previous online iterations. This is related to
the Pool Based Active Learning paradigm (Cohn et al., 1990).
Early active learning literature, also known as Experiment Design (Fedorov, 1972), contrasts
the passive learner, who observes examples (x, y), with the active learner, who constructs queries x
and observes their labels y. In this setup, the active learner cannot beat the passive learner because
he lacks information about the input pattern distribution (Eisenberg and Rivest, 1990). Pool-based
active learning algorithms observe the pattern distribution from a vast pool of unlabelled examples.
Instead of constructing queries, they incrementally select unlabelled examples xk and obtain their
labels yk from an oracle.
Several authors (Campbell et al., 2000; Schohn and Cohn, 2000; Tong and Koller, 2000) propose
incremental active learning algorithms that clearly are related to Active Selection. The initialization
consists of obtaining the labels for a small random subset of examples. A SVM is trained using
all the labelled examples as a training set. Then one searches the pool for the unlabelled example
that comes closest to the SVM decision boundary, one obtains the label of this example, retrains the
SVM and reiterates the process.
Each online iteration of the above algorithm is about M times more computationally expensive
than an online iteration of the basic LASVM algorithm. Indeed one must compute the kernel
expansion (2) for M fresh examples instead of a single one (9). This cost can be reduced by heuristic
techniques for adapting M to the current conditions. For instance, we present experimental
results where one stops collecting new examples as soon as M contains five examples such that
|ŷ(x_s)| < 1 + δ/2.
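A sketch of these selection rules in Python is given below. The `predict` callback, the strategy names, and the default M = 50 are ours; only the general logic (random pick, gradient criterion (12), active criterion (13), and the adaptive sampling heuristic just described) follows the text.

```python
import random

def select_example(candidates, predict, y, delta, strategy="active", M=50):
    """Pick the index of the next training example to PROCESS.

    strategy = "random":     uniform pick over the candidates
    strategy = "gradient":   most misclassified sampled example, criterion (12) -- uses labels
    strategy = "active":     sampled example closest to the boundary, criterion (13) -- no labels
    strategy = "autoactive": like "active", but stop sampling once five candidates
                             fall inside the band |yhat| < 1 + delta/2
    """
    if strategy == "random":
        return random.choice(candidates)
    sample, near_margin = [], 0
    for k in random.sample(candidates, min(M, len(candidates))):
        sample.append(k)
        if strategy == "autoactive" and abs(predict(k)) < 1 + delta / 2:
            near_margin += 1
            if near_margin >= 5:
                break
    if strategy == "gradient":
        return min(sample, key=lambda k: y[k] * predict(k))     # (12)
    return min(sample, key=lambda k: abs(predict(k)))           # (13)
```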
Finally the last two paragraphs of appendix A discuss the convergence of LASVM with example
selection according to the gradient selection criterion or the active selection criterion. The gradient
selection criterion always leads to a solution of the SVM problem. On the other hand, the active
selection criterion only does so when one uses the sampling method. In practice this convergence
occurs very slowly. The next section presents many reasons to prefer the intermediate kernel classi-
fiers visited by this algorithm.
Figure 11: Comparing example selection criteria on the Adult data set. Measurements were per-
formed on 65 runs using randomly selected training sets. The graphs show the error
measured on the remaining testing examples as a function of the number of iterations
and the computing time. The dashed line represents the LIBSVM test error under the
same conditions.
Adult Data Set We first report experiments performed on the Adult data set. This data set
provides a good indication of the relative performance of the Gradient and Active selection criteria
under noisy conditions.
Reliable results were obtained by averaging experimental results measured for 65 random splits
of the full data set into training and test sets. Paired tests indicate that test error differences of 0.25%
on a single run are statistically significant at the 95% level. We conservatively estimate that average
error differences of 0.05% are meaningful.
Figure 11 reports the average error rate measured on the test set as a function of the number
of online iterations (left plot) and of the average computing time (right plot). Regardless of the
training example selection method, all reported results were measured after performing the LASVM
finishing step. More specifically, we run a predefined number of online iterations, save the LASVM
state, perform the finishing step, measure error rates and number of support vectors, and restore the
saved LASVM state before proceeding with more online iterations. Computing time includes the
duration of the online iterations and the duration of the finishing step.
The GRADIENT example selection criterion performs very poorly on this noisy data set. A
detailed analysis shows that most of the selected examples become support vectors with coefficient
reaching the upper bound C. The ACTIVE and AUTOACTIVE criteria both reach smaller test error
rates than those achieved by the SVM solution computed by LIBSVM. The error rates then seem to
increase towards the error rate of the SVM solution (left plot). We believe indeed that continued
iterations of the algorithm eventually yield the SVM solution.
Figure 12 relates error rates and numbers of support vectors. The RANDOM LASVM algorithm
performs as expected: a single pass over all training examples replicates the accuracy and the num-
Figure 12: Comparing example selection criteria on the Adult data set. Test error as a function of
the number of support vectors.
ber of support vectors of the LIBSVM solution. Both the ACTIVE and AUTOACTIVE criteria yield
kernel classifiers with the same accuracy and far fewer support vectors. For instance, the AUTOAC-
TIVE LASVM algorithm reaches the accuracy of the LIBSVM solution using 2500 support vectors
instead of 11278. Figure 11 (right plot) shows that this result is achieved after 150 seconds only.
This is about one fifteenth of the time needed to perform a full RANDOM LASVM epoch.2
Both the ACTIVE LASVM and AUTOACTIVE LASVM algorithms exceed the LIBSVM accuracy
after a few iterations only. This is surprising because these algorithms only use the training labels
of the few selected examples. They both outperform the LIBSVM solution by using only a small
subset of the available training labels.
MNIST Data Set The comparatively clean MNIST data set provides a good opportunity to verify
the behavior of the various example selection criteria on a problem with a much lower error rate.
Figure 13 compares the performance of the RANDOM, GRADIENT and ACTIVE criteria on the
classification of digit 8 versus all other digits. The curves are averaged on 5 runs using different
random seeds. All runs use the standard MNIST training and test sets. Both the GRADIENT and
ACTIVE criteria perform similarly on this relatively clean data set. They require about as much
computing time as RANDOM example selection to achieve a similar test error.
Adding ten percent label noise on the MNIST training data provides additional insight regarding
the relation between noisy data and example selection criteria. Label noise was not applied to the
testing set because the resulting measurement can be readily compared to test errors achieved by
training SVMs without label noise. The expected test errors under similar label noise conditions
can be derived from the test errors measured without label noise. Figure 14 shows the test errors
achieved when 10% label noise is added to the training examples.
2. The timing results reported in Figure 8 were measured on a faster computer.
Figure 13: Comparing example selection criteria on the MNIST data set, recognizing digit 8
against all other classes. Gradient selection and Active selection perform similarly on
this relatively noiseless task.
Figure 14: Comparing example selection criteria on the MNIST data set with 10% label noise on
the training examples.
Figure 15: Comparing example selection criteria on the MNIST data set. Active example selection
is insensitive to the artificial label noise.
The GRADIENT selection criterion causes a very chaotic convergence because it keeps selecting mislabelled training examples.
The ACTIVE selection criterion is obviously undisturbed by the label noise.
Figure 15 summarizes error rates and number of support vectors for all noise conditions. In the
presence of label noise on the training data, LIBSVM yields a slightly higher test error rate, and a
much larger number of support vectors. The RANDOM LASVM algorithm replicates the LIBSVM
results after one epoch. Regardless of the noise conditions, the ACTIVE LASVM algorithm reaches
the accuracy and the number of support vectors of the LIBSVM solution obtained with clean training
data. Although we have not been able to observe it on this data set, we expect that, after a very long
time, the ACTIVE curve for the noisy training set converges to the accuracy and the number of
support vectors of the LIBSVM solution obtained for the noisy training data.
[Two plots of test error as a function of the number of labels, for the USPS and Reuters data sets.]
Figure 16: Comparing active learning methods on the USPS and Reuters data sets. Results are
averaged on 10 random choices of training and test sets. Using LASVM iterations instead
of retraining causes no loss of accuracy. Sampling M = 50 examples instead of searching
all examples only causes a minor loss of accuracy when the number of labeled examples
is very small.
All the active learning methods performed approximately the same, and were superior to ran-
dom selection. Using LASVM iterations instead of retraining causes no loss of accuracy. Sampling
M = 50 examples instead of searching all examples only causes a minor loss of accuracy when the
number of labeled examples is very small. Yet the speedups are very significant: for 500 queried
labels on the Reuters data set, the RETRAIN ACTIVE ALL, LASVM ACTIVE ALL, and LASVM AC-
TIVE 50 algorithms took 917 seconds, 99 seconds, and 9.6 seconds respectively.
5. Discussion
This work started because we observed that data set sizes are quickly outgrowing the computing
power of our computers. One possible avenue consists of harnessing the computing power of
multiple computers (Graf et al., 2005). Instead we propose learning algorithms that remain closely
related to SVMs but require less computational resources. This section discusses their practical and
theoretical implications.
faster than a SVM (figure 2) and using much less memory than a SVM (figure 6). This is very im-
portant in practice because modern data storage devices are most effective when the data is accessed
sequentially.
Active Selection of the LASVM training examples brings two additional benefits for practical
applications. It achieves equivalent performances with significantly less support vectors, further
reducing the required time and memory. It also offers an obvious opportunity to parallelize the
search for informative examples.
Empirical evidence suggests that a single epoch of the LASVM algorithm yields misclassifi-
cation rates comparable with a SVM. We also know that LASVM exactly reaches the SVM
solution after a sufficient number of epochs. Can we theoretically estimate the expected dif-
ference between the first epoch test error and the many epoch test error? Such results exist for
well designed online learning algorithms based on stochastic gradient descent (Murata and
Amari, 1999; Bottou and LeCun, 2005). Unfortunately these results do not directly apply to
kernel classifiers. A better understanding would certainly suggest improved algorithms.
Test error rates are sometimes improved by active example selection. In fact this effect has
already been observed in the active learning setups (Schohn and Cohn, 2000). This small
improvement is difficult to exploit in practice because it requires very sensitive early stopping
criteria. Yet it demands an explanation because it seems that one gets a better performance
by using less information. There are three potential explanations: (i) active selection works
well on unbalanced data sets because it tends to pick equal numbers of examples of each class
(Schohn and Cohn, 2000), (ii) active selection improves the SVM loss function because it
discards distant outliers, (iii) active selection leads to more sparse kernel expansions with
better generalization abilities (Cesa-Bianchi et al., 2005). These three explanations may be
related.
We know that the number of SVM support vectors scales linearly with the number of examples
(Steinwart, 2004). Empirical evidence suggests that active example selection yields transitory
kernel classifiers that achieve low error rates with far fewer support vectors. What is the
scaling law for this new number of support vectors?
What is the minimal computational cost for learning n independent examples and achieving
optimal test error rates? The answer depends of course on how we define these optimal
test error rates. This cost intuitively scales at least linearly with n because one must take at
least a brief look at each example to fully exploit it. The present work suggests that this cost might
be smaller than n times the reduced number of support vectors achievable with the active
learning technique. This range is consistent with previous work showing that stochastic gradient
algorithms can train a fixed capacity model in linear time (Bottou and LeCun, 2005).
Learning seems to be much easier than computing the optimum of the empirical loss.
Section 3.5 suggests that better convergence speed could be attained by cleverly modulating
the number of calls to REPROCESS during the online iterations. Simple heuristics might go a
long way.
Section 4.3 suggests a heuristic to adapt the sampling size for the randomized search of in-
formative training examples. This AUTOACTIVE heuristic performs very well and deserves
further investigation.
Sometimes one can generate a very large number of training examples by exploiting known
invariances. Active example selection can drive the generation of examples. This idea was
suggested in (Loosli et al., 2004) for the SimpleSVM.
6. Conclusion
This work explores various ways to speed up kernel classifiers by asking which examples deserve
more computing time. We have proposed a novel online algorithm that converges to the SVM solu-
tion. LASVM reliably reaches competitive accuracies after performing a single pass over the training
examples, outspeeding state-of-the-art SVM solvers. We have then shown how active example se-
lection can yield faster training, higher accuracies and simpler models using only a fraction of the
training example labels.
Acknowledgments
Part of this work was funded by NSF grant CCR-0325463. We also thank Eric Cosatto, Hans-Peter
Graf, C. Lee Giles and Vladimir Vapnik for their advice and support, Ronan Collobert and Chih-
Jen Lin for thoroughly checking the mathematical appendix, and Sathiya Keerthi for pointing out
reference (Takahashi and Nishi, 2003).
\[
\varphi(x, u) = \max\{\, \lambda \ge 0 \;|\; x + \lambda u \in F \,\}
\]
\[
f^*(x, u) = \max\{\, f(x + \lambda u) \;|\; \lambda \ge 0,\; x + \lambda u \in F \,\}.
\]
\[
f^*(x, u) > f(x)
\quad\Longleftrightarrow\quad
\begin{cases}
\; u'\nabla f(x) > 0 \\
\; u \in D_x.
\end{cases}
\]
Proof  Assume f*(x, u) > f(x). Direction u ≠ 0 is feasible because the maximum f*(x, u) is reached
for some 0 < λ* ≤ φ(x, u). Let λ ∈ [0, 1]. Since set F is convex, x + λλ*u ∈ F. Since function f
is concave, f(x + λλ*u) ≥ (1 − λ) f(x) + λ f*(x, u). Writing a first order expansion when λ → 0
yields λ* u'∇f(x) ≥ f*(x, u) − f(x) > 0. Conversely, assume u'∇f(x) > 0 and u ≠ 0 is a feasible
direction. Recall f(x + λu) = f(x) + λ u'∇f(x) + o(λ). Therefore we can choose 0 < λ₀ ≤ φ(x, u)
such that f(x + λ₀u) > f(x) + λ₀ u'∇f(x)/2. Therefore f*(x, u) ≥ f(x + λ₀u) > f(x).
Theorem 3 (Zoutendijk (1960) page 22)  The following assertions are equivalent:
i) x is a solution of problem (14).
ii) ∀u ∈ R^n,  f*(x, u) ≤ f(x).
iii) ∀u ∈ D_x,  u'∇f(x) ≤ 0.
Proof  The equivalence between assertions (ii) and (iii) results from proposition 2. Assume assertion
(i) is true. Assertion (ii) is necessarily true because f*(x, u) ≤ max_F f = f(x). Conversely, assume
assertion (i) is false. Then there is y ∈ F such that f(y) > f(x). Therefore f*(x, y − x) > f(x)
and assertion (ii) is false.
Proof  Let u be a positive linear combination of the v_i. Since the v_i are feasible directions there are
y_i = x + λ_i v_i ∈ F, and u can be written as Σ_i γ_i (y_i − x) with γ_i ≥ 0. Direction u is feasible because
the convex F contains (Σ_i γ_i y_i) / (Σ_i γ_i) = x + (1 / Σ_i γ_i) u.
Definition 5  A set of directions U ⊂ R^n is a witness family for F when, for any point x ∈ F,
any feasible direction u ∈ D_x can be expressed as a positive linear combination of a finite number
of feasible directions v_j ∈ U ∩ D_x.
Proof  The equivalence between assertions (ii) and (iii) results from proposition 2. Assume assertion
(i) is true. Theorem 3 implies that assertion (ii) is true as well. Conversely, assume assertion
(i) is false. Theorem 3 implies that there is a feasible direction u ∈ R^n on point x such that
u'∇f(x) > 0. Since U is a witness family, there are positive coefficients λ₁ … λ_k and feasible directions
v₁, …, v_k ∈ U ∩ D_x such that u = Σ_i λ_i v_i. We have then Σ_j λ_j v_j'∇f(x) > 0. Since all coefficients
λ_j are positive, there is at least one term j₀ such that v_{j₀}'∇f(x) > 0. Assertion (iii) is therefore false.
The following proposition provides an example of witness family for the convex domain F_s that
appears in the SVM QP problem (5).
Let us recursively define a sequence of points z(j) ∈ B. We start with z(0) = x ∈ B. For each
t ≥ 0, we define two sets of coordinate indices I_t⁺ = {i | z_i(t) < y_i} and I_t⁻ = {j | z_j(t) > y_j}. The
recursion stops if either set is empty. Otherwise, we choose i ∈ I_t⁺ and j ∈ I_t⁻ and define z(t+1) =
z(t) + γ(t) v(t) ∈ B with v(t) = e_i − e_j ∈ U_s and γ(t) = min(y_i − z_i(t), z_j(t) − y_j) > 0. Intuitively, we
move towards y along direction v(t) until we hit the boundaries of set B.
Each iteration removes at least one of the indices i or j from sets I_t⁺ and I_t⁻. Eventually one of
these sets gets empty and the recursion stops after a finite number k of iterations. The other set is
also empty because
\[
\sum_{i \in I_k^+} |y_i - z_i(k)| \;-\; \sum_{i \in I_k^-} |y_i - z_i(k)|
\;=\; \sum_{i=1}^{n} \big( y_i - z_i(k) \big)
\;=\; \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} z_i(k) \;=\; 0.
\]
Therefore z(k) = y and u = y − x = Σ_t γ(t) v(t). Moreover the v(t) are feasible directions on x because
v(t) = e_i − e_j with i ∈ I_t⁺ ⊂ I_0⁺ and j ∈ I_t⁻ ⊂ I_0⁻.
Assertion (iii) in Theorem 6 then yields the following necessary and sufficient optimality criterion
for the SVM QP problem (5):
\[
\forall (i, j) \in \{1 \dots n\}^2, \qquad
x_i < B_i \;\text{ and }\; x_j > A_j
\;\Longrightarrow\;
\frac{\partial f}{\partial x_i}(x) - \frac{\partial f}{\partial x_j}(x) \le 0.
\]
Different constraint sets call for different choices of witness family. For instance, it is sometimes
useful to disregard the equality constraint in the SVM polytope F_s. Along the lines of proposition 7,
it is quite easy to prove that {±e_i, i = 1 … n} is a witness family. Theorem 6 then yields an adequate
optimality criterion.
Proof  We first show that F ⊂ ∩_{x∈F} C_x. Indeed F ⊂ C_x for all x because every point z ∈ F defines
a feasible direction z − x ∈ D_x.
Conversely, let z ∈ ∩_{x∈F} C_x and assume that z does not belong to F. Let ẑ be the projection of z
on F. Choose 0 < λ < φ(ẑ, z − ẑ). We know that λ < 1 because z does not belong to F. But then
ẑ + λ(z − ẑ) ∈ F is closer to z than ẑ. This contradicts the definition of the projection ẑ.
Proof  Consider a point x ∈ F and let {v₁ … v_k} = U ∩ D_x. Proposition 4 and definition 5 imply
that D_x is the polyhedral cone {z = Σ_i λ_i v_i, λ_i ≥ 0} and can be represented (Schrijver, 1986) by a
finite number of linear equality and inequality constraints of the form n·z ≤ 0 where the directions n
are unit vectors. Let K_x be the set of these unit vectors. Equality constraints arise when the set K_x
contains both n and −n. Each set K_x depends only on the subset {v₁ … v_k} = U ∩ D_x of feasible
witness directions in x. Since the finite set U contains only a finite number of potential subsets,
there is only a finite number of distinct sets K_x.
Each set C_x is therefore represented by the constraints n·z ≤ n·x for n ∈ K_x. The intersection
F = ∩_{x∈F} C_x is then defined by all the constraints associated with C_x for any x ∈ F. These constraints
involve only a finite number of unit vectors n because there is only a finite number of distinct sets
K_x.
Inequalities defined by the same unit vector n can be summarized by considering only the most
restrictive right hand side. Therefore F is described by a finite number of equality and inequality
constraints.
Proof  The polytope F is defined by a finite set of constraints n·x ≤ b. Let K_F be the set of pairs
(n, b) representing these constraints. Function x ↦ φ(x, u) is continuous on F because we can
write:
\[
\varphi(x, u) = \min\left\{ \frac{b - n\cdot x}{n\cdot u} \;\text{ for all } (n, b) \in K_F \text{ such that } n\cdot u > 0 \right\}.
\]
Function x ↦ φ(x, u) is uniformly continuous because it is continuous on the compact F.
Choose ε > 0 and let x, y ∈ F. Let the maximum f*(x, u) be reached in x + λ*u with 0 ≤ λ* ≤
φ(x, u). Since f is uniformly continuous on the compact F, there is η > 0 such that |f(x + λ*u) − f(y +
λ'u)| < ε whenever ‖x − y + (λ* − λ')u‖ < η(1 + ‖u‖). In particular, it is sufficient to have ‖x − y‖ < η
and |λ* − λ'| < η. Since φ is uniformly continuous, there is ν > 0 such that |φ(y, u) − φ(x, u)| < η
whenever ‖x − y‖ < ν. We can then select 0 ≤ λ' ≤ φ(y, u) such that |λ* − λ'| < η. Therefore, when
‖x − y‖ < min(η, ν),  f*(x, u) = f(x + λ*u) ≤ f(y + λ'u) + ε ≤ f*(y, u) + ε.
By reversing the roles of x and y in the above argument, we can similarly establish that f*(y, u) ≤
f*(x, u) + ε when ‖x − y‖ ≤ min(η, ν). Function x ↦ f*(x, u) is therefore uniformly continuous on
F.
Clearly the Stochastic WDS algorithm does not work if the distributions Pt always give probabil-
ity zero to important directions. On the other hand, convergence is easily established if all feasible
directions can be drawn with non zero minimal probability at any time.
3. We believe that the converse of Theorem 9 is also true.
Theorem 11  Let f be a concave function defined on a compact convex set F, differentiable with
continuous derivatives. Assume U is a finite witness set for set F, and let the sequence x_t be defined
by the Stochastic WDS algorithm above. Further assume there is π > 0 such that P_t(u) > π for all
u ∈ U ∩ D_{x_{t−1}}. All accumulation points of the sequence x_t are then solutions of problem (14) with
probability 1.
Proof  We want to evaluate the probability of the event $Q$ comprising all sequences of selected directions $(u_1, u_2, \dots)$ leading to a situation where $x_t$ has an accumulation point $x^*$ that is not a solution of problem (14).
For each sequence of directions $(u_1, u_2, \dots)$, the sequence $f(x_t)$ is increasing and bounded. It converges to $\bar f = \sup_t f(x_t)$. We have $f(x^*) = \bar f$ because $f$ is continuous. By Theorem 6, there is a direction $u \in U$ such that $f^*(x^*, u) > \bar f$ and $\phi(x^*, u) > 0$. Let $x_{k_t}$ be a subsequence converging to $x^*$. Thanks to the continuity of $\phi$, $f^*$ and $f$, there is a $t_0$ such that $f^*(x_{k_t}, u) > \bar f$ and $\phi(x_{k_t}, u) > 0$ for all $k_t > t_0$.
Choose $\varepsilon > 0$ and let $Q_T \subset Q$ contain only the sequences of directions such that $t_0 = T$. For any $k_t > T$, we know that $\phi(x_{k_t}, u) > 0$, which means $u \in U \cap D_{x_{k_t}}$. We also know that $u_{k_t} \ne u$ because we would otherwise obtain a contradiction $f(x_{k_t + 1}) = f^*(x_{k_t}, u) > \bar f$. The probability of selecting such a $u_{k_t}$ is therefore smaller than $(1 - \pi)$. The probability that this happens simultaneously for $N$ distinct $k_t \ge T$ is smaller than $(1 - \pi)^N$ for any $N$. We get $P(Q_T) \le \varepsilon / T^2$ by choosing $N$ large enough.
Then we have $P(Q) = \sum_T P(Q_T) \le \varepsilon \sum_T 1/T^2 = K\varepsilon$. Hence $P(Q) = 0$ because we can choose $\varepsilon$ as small as we want. We can therefore assert with probability 1 that all accumulation points of the sequence $x_t$ are solutions.
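For the reader's convenience, the two quantitative claims in this argument unpack as follows (an editorial note, not part of the original proof): any integer $N$ at least as large as the right-hand side below makes $(1-\pi)^N \le \varepsilon/T^2$, and the constant $K$ is simply the finite sum of inverse squares,
$$ (1-\pi)^N \le \frac{\varepsilon}{T^2} \iff N \ge \frac{\ln(T^2/\varepsilon)}{\ln\frac{1}{1-\pi}}, \qquad P(Q) \;\le\; \sum_{T \ge 1} \frac{\varepsilon}{T^2} \;=\; K\varepsilon \quad\text{with}\quad K = \sum_{T \ge 1} \frac{1}{T^2} < 2 . $$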
This condition on the distributions $P_t$ is unfortunately too restrictive. The PROCESS and REPROCESS iterations of the Online LASVM algorithm (Section 3.2) only exploit directions from very specific subsets.
On the other hand, the Online LASVM algorithm only ensures that any remaining feasible direc-
tion at time T will eventually be selected with probability 1. Yet it is challenging to mathematically
express that there is no coupling between the subset of time points t corresponding to a subsequence
converging to a particular accumulation point, and the subset of time points t corresponding to the
iterations where specific feasible directions are selected.
This problem also occurs in the deterministic Generalized SMO algorithm (Section 3.1). An asymptotic convergence proof (Lin, 2001) only exists for the important case of the SVM QP problem using a specific direction selection strategy. Following Keerthi and Gilbert (2002), we bypass this technical difficulty by defining a notion of approximate optimum and proving convergence in finite time. It is then easy to discuss the properties of the limit point.
Definition 12  Let $U$ be a finite witness family for the convex set $F$, and let $\kappa \ge 0$ and $\tau \ge 0$ be two tolerances. A point $x \in F$ is a $\kappa\tau$-approximate solution of problem (14) when
$$ \forall u \in U, \qquad \phi(x, u) \le \kappa \ \ \text{or}\ \ u'\nabla f(x) \le \tau . $$
A vector $u \in \mathbb{R}^n$ such that $\phi(x, u) > \kappa$ and $u'\nabla f(x) > \tau$ is called a $\kappa\tau$-violating direction in point $x$.
This definition is inspired by assertion (iii) in Theorem 6. The definition demands a finite witness family because this leads to Proposition 13, establishing that $\kappa\tau$-approximate solutions indicate the location of actual solutions when $\kappa$ and $\tau$ tend to zero.
Proposition 13  Let $U$ be a finite witness family for the bounded convex set $F$. Consider a sequence $x_t \in F$ of $\kappa_t\tau_t$-approximate solutions of problem (14) with $\kappa_t \to 0$ and $\tau_t \to 0$. The accumulation points of this sequence are solutions of problem (14).
Proof  Consider an accumulation point $x^*$ and a subsequence $x_{k_t}$ converging to $x^*$. Define a function $\psi$ such that $u$ is a $\kappa\tau$-violating direction in $x$ if and only if $\psi(x, \kappa, \tau, u) > 0$, for instance $\psi(x, \kappa, \tau, u) = \min\{\phi(x, u) - \kappa,\ u'\nabla f(x) - \tau\}$. Function $\psi$ is continuous thanks to Theorem 9, Proposition 10 and to the continuity of $\nabla f$. Therefore, we have $\psi(x_{k_t}, \kappa_{k_t}, \tau_{k_t}, u) \le 0$ for all $u \in U$. Taking the limit when $k_t \to \infty$ gives $\psi(x^*, 0, 0, u) \le 0$ for all $u \in U$. Theorem 6 then states that $x^*$ is a solution.
The following algorithm introduces the two tolerance parameters $\kappa > 0$ and $\tau > 0$ into the Stochastic Witness Direction Search algorithm; its structure is sketched below.
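The original listing of this algorithm does not survive in the present extraction. The Python sketch below is an editorial reconstruction of its structure from the surrounding text and Definition 12, not the authors' pseudo-code: at each step a direction is drawn from $P_t$ and, if it is $\kappa\tau$-violating, the iterate moves to the maximum of $f$ along the feasible segment. The helpers `sample_direction`, `phi`, `grad_f` and `line_max` are hypothetical callbacks supplied by the caller.

```python
import numpy as np

def approximate_stochastic_wds(x0, U, sample_direction, phi, grad_f, line_max,
                               kappa, tau, max_iter=10_000):
    """Sketch of one possible Approximate Stochastic WDS loop:
    (2a) draw a direction u_t from the distribution P_t over the witness family U;
    (2b) if u_t is kappa-tau-violating, i.e. phi(x, u_t) > kappa and
         u_t' grad_f(x) > tau, move to the maximum of f on the segment
         [x, x + phi(x, u_t) u_t]; otherwise keep x unchanged."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, max_iter + 1):
        u = sample_direction(t, x, U)                   # step (2a): u_t drawn from P_t
        if phi(x, u) > kappa and u @ grad_f(x) > tau:   # kappa-tau-violating direction?
            lam = line_max(x, u)                        # argmax of f over [0, phi(x, u)]
            x = x + lam * u                             # step (2b)
        # Proposition 16: the iterate stops changing after finitely many steps.
    return x
```

With $\kappa = \tau = 0$, every feasible ascent direction triggers a step, which is how the exact Stochastic WDS discussed before Theorem 11 behaves.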
The successive search directions $u_t$ are drawn from some unspecified distributions $P_t$ defined on $U$. Proposition 16 establishes that this algorithm always converges to some $x^* \in F$ after a finite number of steps, regardless of the selected directions $(u_t)$. The proof relies on the two intermediate results that generalize a lemma proposed by Keerthi and Gilbert (2002) in the case of quadratic functions.
Proof  Let the maximum $f(x_t) = f^*(x_{t-1}, u_t)$ be attained in $x_t = x_{t-1} + \lambda^* u_t$ with $0 \le \lambda^* \le \phi(x_{t-1}, u_t)$. We know that $\lambda^* \ne 0$ because $u_t$ is $\kappa\tau$-violating and Proposition 2 implies $f^*(x_{t-1}, u_t) > f(x_{t-1})$. If $\lambda^*$ reaches its upper bound, $\phi(x_t, u_t) = 0$. Otherwise $x_t$ is an unconstrained maximum and $u_t'\nabla f(x_t) = 0$.
Proof  The relation is obvious when $u_t$ is not a $\kappa\tau$-violating direction in $x_{t-1}$. Otherwise let the maximum $f(x_t) = f^*(x_{t-1}, u_t)$ be attained in $x_t = x_{t-1} + \lambda^* u_t$.
Let $\lambda = \nu\lambda^*$ with $0 < \nu \le 1$. Since $x_t$ is a maximum,
$$ f(x_t) - f(x_{t-1}) \;\ge\; \frac{\tau}{2}\,\min\!\left( \frac{\|x_t - x_{t-1}\|}{D_U},\ \frac{\tau}{D_U^2 H} \right) , $$
where $D_U$ bounds $\|u\|$ over $U$ and $H$ bounds the second derivatives of $f$ on the compact $F$.
Proposition 16  Assume $U$ is a finite witness set for set $F$. The Approximate Stochastic WDS algorithm converges to some $x^* \in F$ after a finite number of steps.
Proof  The sequence $f(x_t)$ converges because it is increasing and bounded. Therefore it satisfies Cauchy's convergence criterion:
$$ \forall \varepsilon > 0,\ \exists t_0,\ \forall t_2 > t_1 > t_0, \qquad f(x_{t_2}) - f(x_{t_1}) \;=\; \sum_{t_1 < t \le t_2} \bigl( f(x_t) - f(x_{t-1}) \bigr) \;<\; \varepsilon . $$
In general, Proposition 16 only holds for $\kappa > 0$ and $\tau > 0$. Keerthi and Gilbert (2002) assert a similar property for $\kappa = 0$ and $\tau > 0$ in the case of SVMs only. Despite a mild flaw in the final argument of the initial proof, this assertion is correct (Takahashi and Nishi, 2003).
Proposition 16 does not prove that the limit $x^*$ is related to the solution of the optimization problem (14). Additional assumptions on the direction selection step are required. Theorem 17 addresses the deterministic case by considering trivial distributions $P_t$ that always select a $\kappa\tau$-violating direction if such directions exist. Theorem 18 addresses the stochastic case under mild conditions on the distribution $P_t$.
Theorem 17  Let the concave function $f$ defined on the compact convex set $F$ be twice differentiable with continuous second derivatives. Assume $U$ is a finite witness set for set $F$, and let the sequence $x_t$ be defined by the Approximate Stochastic WDS algorithm above. Assume that step (2a) always selects a $\kappa\tau$-violating direction in $x_{t-1}$ if such directions exist. Then $x_t$ converges to a $\kappa\tau$-approximate solution of problem (14) after a finite number of steps.
Proof  Proposition 16 establishes that there is $t_0$ such that $x_t = x^*$ for all $t \ge t_0$. Assume there is a $\kappa\tau$-violating direction in $x^*$. For any $t > t_0$, step (2a) always selects such a direction, and step (2b) makes $x_t$ different from $x_{t-1} = x^*$. This contradicts the definition of $t_0$. Therefore there are no $\kappa\tau$-violating directions in $x^*$ and $x^*$ is a $\kappa\tau$-approximate solution.
Example (SMO)  The SMO algorithm (Section 3.1) is⁴ an Approximate Stochastic WDS that always selects a $\kappa\tau$-violating direction when one exists. Therefore Theorem 17 applies.
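To make the connection concrete, here is a small editorial illustration, not taken from the paper: it assumes the usual generalized SMO setting of Keerthi and Gilbert (2002), with box constraints $A_i \le \alpha_i \le B_i$ and a gradient vector $g$, which is not restated in this appendix. It enumerates the $\tau$-violating pairs, i.e. the search directions among which SMO picks.

```python
def tau_violating_pairs(alpha, g, A, B, tau):
    """Enumerate tau-violating pairs (i, j) in the usual generalized SMO setting
    (box constraints A[i] <= alpha[i] <= B[i], gradient vector g): a pair is
    tau-violating when alpha_i can still grow, alpha_j can still shrink, and
    g[i] - g[j] > tau. The associated witness direction is e_i - e_j, which
    preserves the equality constraint on the sum of the coefficients."""
    n = len(alpha)
    can_grow = [i for i in range(n) if alpha[i] < B[i]]
    can_shrink = [j for j in range(n) if alpha[j] > A[j]]
    return [(i, j) for i in can_grow for j in can_shrink if g[i] - g[j] > tau]
```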
Theorem 18  Let the concave function $f$ defined on the compact convex set $F$ be twice differentiable with continuous second derivatives. Assume $U$ is a finite witness set for set $F$, and let the sequence $x_t$ be defined by the Approximate Stochastic WDS algorithm above. Let $p_t$ be the conditional probability that $u_t$ is $\kappa\tau$-violating in $x_{t-1}$ given that $U$ contains such directions. Assume that $\limsup p_t > 0$. Then $x_t$ converges with probability one to a $\kappa\tau$-approximate solution of problem (14) after a finite number of steps.
Proof  Proposition 16 establishes that for each sequence of selected directions $u_t$, there is a time $t_0$ and a point $x^* \in F$ such that $x_t = x^*$ for all $t \ge t_0$. Both $t_0$ and $x^*$ depend on the sequence of directions $(u_1, u_2, \dots)$.
We want to evaluate the probability of the event $Q$ comprising all sequences of directions $(u_1, u_2, \dots)$ leading to a situation where there are $\kappa\tau$-violating directions in point $x^*$. Choose $\varepsilon > 0$ and let $Q_T \subset Q$ contain only the sequences of decisions $(u_1, u_2, \dots)$ such that $t_0 = T$.
Since $\limsup p_t > 0$, there is $\pi > 0$ and a subsequence $k_t$ such that $p_{k_t} \ge \pi$. For any $k_t > T$, we know that $U$ contains $\kappa\tau$-violating directions in $x_{k_t - 1} = x^*$. Direction $u_{k_t}$ is not one of them, because selecting one would make $x_{k_t}$ different from $x_{k_t - 1} = x^*$.
4. Strictly speaking we should introduce the tolerance $\kappa > 0$ into the SMO algorithm. We can also claim that Keerthi and Gilbert (2002) and Takahashi and Nishi (2003) have established Proposition 16 with $\kappa = 0$ and $\tau > 0$ for the specific case of SVMs. Therefore Theorems 17 and 18 remain valid.
This occurs with probability $1 - p_{k_t} \le 1 - \pi < 1$. The probability that this happens simultaneously for $N$ distinct $k_t > T$ is smaller than $(1 - \pi)^N$ for any $N$. We get $P(Q_T) \le \varepsilon / T^2$ by choosing $N$ large enough.
Then we have $P(Q) = \sum_T P(Q_T) \le \varepsilon \sum_T 1/T^2 = K\varepsilon$. Hence $P(Q) = 0$ because we can choose $\varepsilon$ as small as we want. We can therefore assert with probability 1 that $U$ contains no $\kappa\tau$-violating directions in point $x^*$.
Example (LASVM)  The LASVM algorithm (Section 3.2) is⁵ an Approximate Stochastic WDS that alternates two strategies for selecting search directions: PROCESS and REPROCESS. Theorem 18 applies because $\limsup p_t > 0$.
Proof  Consider an arbitrary iteration $T$ corresponding to a REPROCESS.
Let us define the following assertions:
A: There are $\tau$-violating pairs $(i, j)$ with both $i \in S$ and $j \in S$.
B: A is false, but there are $\tau$-violating pairs $(i, j)$ with either $i \in S$ or $j \in S$.
C: A and B are false, but there are $\tau$-violating pairs $(i, j)$.
$Q_t$: Direction $u_t$ is $\tau$-violating in $x_{t-1}$.
A reasoning similar to the convergence discussion in Section 3.2 gives the following lower bounds (where $n$ is the total number of examples):
$$ P(Q_T \mid A) = 1 $$
$$ P(Q_T \mid B) = 0, \qquad P(Q_{T+1} \mid B) \ge n^{-1} $$
$$ P(Q_T \mid C) = 0, \qquad P(Q_{T+1} \mid C) = 0, \qquad P(Q_{T+2} \mid C) = 0, \qquad P(Q_{T+3} \mid C) \ge n^{-2} . $$
Therefore
$$ P(\, Q_T \cup Q_{T+1} \cup Q_{T+2} \cup Q_{T+3} \mid A \,) \;\ge\; n^{-2} $$
$$ P(\, Q_T \cup Q_{T+1} \cup Q_{T+2} \cup Q_{T+3} \mid B \,) \;\ge\; n^{-2} $$
$$ P(\, Q_T \cup Q_{T+1} \cup Q_{T+2} \cup Q_{T+3} \mid C \,) \;\ge\; n^{-2} . $$
Since $p_t = P(Q_t \mid A \cup B \cup C)$ and since the events A, B, and C are disjoint, we have
$$ p_T + p_{T+1} + p_{T+2} + p_{T+3} \;\ge\; P(\, Q_T \cup Q_{T+1} \cup Q_{T+2} \cup Q_{T+3} \mid A \cup B \cup C \,) \;\ge\; n^{-2} . $$
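As an editorial note connecting this bound to the claim that Theorem 18 applies: REPROCESS iterations occur infinitely often, so the inequality above holds for infinitely many $T$; at least one of every four consecutive terms is then at least $n^{-2}/4$, which gives
$$ \limsup_{t \to \infty} p_t \;\ge\; \frac{1}{4}\, n^{-2} \;>\; 0 . $$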
Example (LASVM + Gradient Selection)  The LASVM algorithm with Gradient Example Selection remains an Approximate WDS algorithm. Whenever Random Example Selection has a nonzero probability to pick a $\tau$-violating pair, Gradient Example Selection picks a $\tau$-violating pair with maximal gradient with probability one. Reasoning as above yields $\limsup p_t \ge 1/4$. Therefore Theorem 18 applies and the algorithm converges to a solution of the SVM QP problem.
Example (LASVM + Active Selection + Randomized Search)  The LASVM algorithm with Active Example Selection remains an Approximate WDS algorithm. However it does not necessarily verify the conditions of Theorem 18. There might indeed be $\tau$-violating pairs that do not involve the example closest to the decision boundary.
However, convergence occurs when one uses the Randomized Search method to select an example near the decision boundary. There is indeed a probability greater than $1/n^M$ to draw a sample containing $M$ copies of the same example.
5. See footnote 4 discussing the tolerance $\kappa$ in the case of SVMs.
Reasoning as above yields $\limsup p_t \ge \frac{1}{4}\, n^{-2M}$. Therefore Theorem 18 applies and the algorithm eventually converges to a solution of the SVM QP problem.
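As a short editorial illustration of the orders of magnitude involved (the numbers below are hypothetical, not taken from the paper's experiments): drawing $M$ independent uniform examples,
$$ \Pr\bigl[\text{all $M$ draws equal a given example}\bigr] = n^{-M}, \qquad \Pr\bigl[\text{all $M$ draws coincide}\bigr] = n \cdot n^{-M} = n^{1-M} . $$
With, say, $n = 10^4$ and $M = 50$, this probability is of the order of $10^{-196}$.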
In practice this convergence occurs very slowly because it involves very rare events. On the other
hand, there are good reasons to prefer the intermediate kernel classifiers visited by this algorithm
(see Section 4).
References
M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964.
N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
G. Bakır, L. Bottou, and J. Weston. Breaking SVM complexity with cross-training. In Lawrence Saul, Bernhard Schölkopf, and Léon Bottou, editors, Advances in Neural Information Processing Systems, volume 17, pages 81–88. MIT Press, 2005.
A. Bordes and L. Bottou. The Huller: a simple and efficient online SVM. In Proceedings of the 16th European Conference on Machine Learning (ECML 2005), Lecture Notes in Artificial Intelligence, to appear. Springer, 2005.
L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, L. D. Jackel, Y. LeCun, U. A. Müller, E. Säckinger, P. Simard, and V. Vapnik. Comparison of classifier methods: a case study in handwritten digit recognition. In Proceedings of the 12th IAPR International Conference on Pattern Recognition, Conference B: Computer Vision & Image Processing, volume 2, pages 77–82, Jerusalem, October 1994. IEEE.
L. Bottou and Y. LeCun. On-line learning for very large datasets. Applied Stochastic Models in Business and Industry, 21(2):137–151, 2005.
C. Campbell, N. Cristianini, and A. J. Smola. Query learning with large margin classifiers. In Proceedings of ICML 2000, 2000.
G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In Advances in Neural Information Processing Systems, 2001.
N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Worst-case analysis of selective sampling for linear-threshold algorithms. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 241–248. MIT Press, Cambridge, MA, 2005.
C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines. Technical report, Computer Science and Information Engineering, National Taiwan University, 2001–2004. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
D. Cohn, L. Atlas, and R. Ladner. Training connectionist networks with queries and selective sampling. In D. Touretzky, editor, Advances in Neural Information Processing Systems 2, San Mateo, CA, 1990. Morgan Kaufmann.
R. Collobert and S. Bengio. SVMTorch: Support vector machines for large-scale regression problems. Journal of Machine Learning Research, 1:143–160, 2001.
R. Collobert, S. Bengio, and Y. Bengio. A parallel mixture of SVMs for very large scale problems. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273–297, 1995.
K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, Cambridge, UK, 2000.
C. Domingo and O. Watanabe. MadaBoost: a modification of AdaBoost. In Proceedings of the 13th Annual Conference on Computational Learning Theory, COLT'00, pages 180–189, 2000.
B. Eisenberg and R. Rivest. On the sample complexity of PAC learning using random and chosen examples. In M. Fulk and J. Case, editors, Proceedings of the Third Annual ACM Workshop on Computational Learning Theory, pages 154–162, San Mateo, CA, 1990. Kaufmann.
V. V. Fedorov. Theory of Optimal Experiments. Academic Press, New York, 1972.
Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. In J. Shavlik, editor, Machine Learning: Proceedings of the Fifteenth International Conference, San Francisco, CA, 1998. Morgan Kaufmann.
T.-T. Frieß, N. Cristianini, and C. Campbell. The kernel Adatron algorithm: a fast and simple learning procedure for support vector machines. In J. Shavlik, editor, 15th International Conf. Machine Learning, pages 188–196. Morgan Kaufmann Publishers, 1998. See (Cristianini and Shawe-Taylor, 2000, section 7.2) for an updated presentation.
C. Gentile. A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213–242, 2001.
H.-P. Graf, E. Cosatto, L. Bottou, I. Dourdanovic, and V. Vapnik. Parallel support vector machines: The Cascade SVM. In Lawrence Saul, Bernhard Schölkopf, and Léon Bottou, editors, Advances in Neural Information Processing Systems, volume 17. MIT Press, 2005.
I. Guyon, B. Boser, and V. Vapnik. Automatic capacity tuning of very large VC-dimension classifiers. In S. J. Hanson, J. D. Cowan, and C. Lee Giles, editors, Advances in Neural Information Processing Systems, volume 5, pages 147–155. Morgan Kaufmann, San Mateo, CA, 1993.
T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169–184, Cambridge, MA, 1999. MIT Press.
S. S. Keerthi and E. G. Gilbert. Convergence of a generalized SMO algorithm for SVM classifier design. Machine Learning, 46:351–360, 2002.
Y. Li and P. Long. The relaxed online maximum margin algorithm. Machine Learning, 46:361–387, 2002.
C.-J. Lin. On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks, 12(6):1288–1298, 2001.
N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, 1986.
G. Loosli, S. Canu, S. V. N. Vishwanathan, A. J. Smola, and M. Chattopadhyay. Une boîte à outils rapide et simple pour les SVM. In Michel Liquière and Marc Sebban, editors, CAp 2004 - Conférence d'Apprentissage, pages 113–128. Presses Universitaires de Grenoble, 2004. ISBN 9-782706-112249.
D. J. C. MacKay. Information based objective functions for active data selection. Neural Computation, 4(4):589–603, 1992.
N. Murata and S.-I. Amari. Statistical analysis of learning dynamics. Signal Processing, 74(1):3–28, 1999.
J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185–208, Cambridge, MA, 1999. MIT Press.
F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
G. Schohn and D. Cohn. Less is more: Active learning with support vector machines. In Pat Langley, editor, Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), pages 839–846. Morgan Kaufmann, June 2000.
B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
A. Schrijver. Theory of Linear and Integer Programming. John Wiley and Sons, New York, 1986.
N. Takahashi and T. Nishi. On termination of the SMO algorithm for support vector machines. In Proceedings of the International Symposium on Information Science and Electrical Engineering 2003 (ISEE 2003), pages 187–190, November 2003.
S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In P. Langley, editor, Proceedings of the 17th International Conference on Machine Learning, San Francisco, California, 2000. Morgan Kaufmann.
I. W. Tsang, J. T. Kwok, and P.-M. Cheung. Very large SVM training using core vector machines. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTAT'05). Society for Artificial Intelligence and Statistics, 2005.
V. Vapnik and A. Lerner. Pattern recognition using generalized portrait method. Automation and Remote Control, 24:774–780, 1963.
J. Weston, A. Bordes, and L. Bottou. Online (and offline) on an even tighter budget. In Robert G. Cowell and Zoubin Ghahramani, editors, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Jan 6-8, 2005, Savannah Hotel, Barbados, pages 413–420. Society for Artificial Intelligence and Statistics, 2005.