
Journal of Machine Learning Research 2 (2002) 499-526

Submitted 7/01; Published 3/02

Stability and Generalization

Olivier Bousquet
[email protected]
CMAP, École Polytechnique
F-91128 Palaiseau, FRANCE

André Elisseeff
[email protected]
BIOwulf Technologies
305 Broadway, New York, NY 10007

Editor: Dana Ron

Abstract
We define notions of stability for learning algorithms and show how to use these notions to
derive generalization error bounds based on the empirical error and the leave-one-out error. The
methods we use can be applied in the regression framework as well as in the classification one when
the classifier is obtained by thresholding a real-valued function. We study the stability properties
of large classes of learning algorithms such as regularization-based algorithms. In particular we
focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how
to apply the results to SVM for regression and classification.

1. Introduction
A key issue in the design of efficient Machine Learning systems is the estimation of the accuracy of
learning algorithms. Among the several approaches that have been proposed to this problem, one of
the most prominent is based on the theory of uniform convergence of empirical quantities to their
mean (see e.g. Vapnik, 1982). This theory provides ways to estimate the risk (or generalization
error) of a learning system based on an empirical measurement of its accuracy and a measure of its
complexity, such as the Vapnik-Chervonenkis (VC) dimension or the fat-shattering dimension (see
e.g. Alon et al., 1997).
We explore here a different approach which is based on sensitivity analysis. Sensitivity analysis
aims at determining how much the variation of the input can influence the output of a system.1
It has been applied to many areas such as statistics and mathematical programming. In the latter
domain, it is often referred to as perturbation analysis (see Bonnans and Shapiro, 1996, for a survey).
The motivation for such an analysis is to design robust systems that will not be affected by noise
corrupting the inputs.

1. For a qualitative discussion about sensitivity analysis with links to other resources see e.g. http://sensitivity-analysis.jrc.cec.eu.int/
In this paper, the objects of interest are learning algorithms. They take as input a learning set
made of instance-label pairs and output a function that maps instances to the corresponding labels.
The sensitivity in that case is thus related to changes of the outcome of the algorithm when the
learning set is changed. There are two sources of randomness such algorithms have to cope with: the
first one comes from the sampling mechanism used to generate the learning set and the second one is
due to noise in the measurements (on the instance and/or label). In contrast to standard approaches
to sensitivity analysis, we mainly focus on the sampling randomness and we thus are interested in
how changes in the composition of the learning set influence the function produced by the algorithm.
The outcome of such an approach is a principled way of getting bounds on the difference between
empirical and generalization error. These bounds are obtained using powerful statistical tools known
as concentration inequalities. The latter are the mathematical device corresponding to the following
statement, from Talagrand (1996):
A random variable that depends (in a smooth way) on the influence of many independent variables (but not too much on any of them) is essentially constant.
The expression "essentially constant" actually means that the random variable will have, with
high probability, a value close to its expected value. We will apply these inequalities to the random
variable we are interested in, that is, the difference between an empirical measure of error and the
true generalization error. We will see that this variable has either a zero expectation or it has a
nice property: the condition under which it concentrates around its expectation implies that its
expectation is close to zero. That means that if we impose conditions on the learning system such
that the difference between the empirical error and the true generalization error is roughly constant,
then this constant is zero. This observation and the existence of concentration inequalities will allow
us to state exponential bounds on the generalization error of a stable learning system.
The outline of the paper is as follows: after reviewing previous work in the area of stability
analysis of learning algorithms, we introduce three notions of stability (Section 3) and derive bounds
on the generalization error of stable learning systems (Section 4). In Section 5, we show that many
existing algorithms such as SVM for classification and regression, ridge regression or variants of
maximum relative entropy discrimination do satisfy the stability requirements. For each of these
algorithms, it is then possible to derive original bounds which have many attractive properties.
Previous work
It has long been known that when trying to estimate an unknown function from data, one needs to
find a tradeoff between bias and variance.2 Indeed, on one hand, it is natural to use the largest model
in order to be able to approximate any function, while on the other hand, if the model is too large,
then the estimation of the best function in the model will be harder given a restricted amount of
data. Several ideas have been proposed to fight against this phenomenon. One of them is to perform
estimation in several models of increasing size and then to choose the best estimator based on a
complexity penalty (e.g. Structural Risk Minimization). This allows one to control the complexity
while still using a large model. This technique is somewhat related to regularization procedures that
we will study in greater detail in subsequent sections. Another idea is to use statistical procedures
to reduce the variance without altering the bias. One such technique is the bagging approach of
Breiman (1996a) which consists in averaging several estimators built from random subsamples of
the data.

2. We deliberately do not provide a precise definition of bias and variance and resort to common intuition about these notions. In broad terms, the bias is the best error that can be achieved and the variance is the difference between the typical error and the best error.
Although it is generally accepted that having a low variance (or a high stability in our terminology) is a desirable property for a learning algorithm, there are few quantitative results relating
the generalization error to the stability of the algorithm with respect to changes in the training set.
The first such results were obtained by Devroye, Rogers and Wagner in the seventies (see Rogers
and Wagner, 1978, Devroye and Wagner, 1979a,b). Rogers and Wagner (1978) first showed that
the variance of the leave-one-out error can be upper bounded by what Kearns and Ron (1999) later
called hypothesis stability. This quantity measures how much the function learned by the algorithm
will change when one point in the training set is removed. The main distinctive feature of their
approach is that, unlike VC-theory based approaches where the only property of the algorithm that
matters is the size of the space to be searched, it focuses on how the algorithm searches the space.
This explains why it has been successfully applied to the k-Nearest Neighbors algorithm (k-NN)
whose search space is known to have an infinite VC-dimension. Indeed, results from VC-theory
would not be of any help in that case since they are meaningful when the learning algorithm performs minimization of the empirical error in the full function space. However, the k-NN algorithm
is very stable because of its locality. This allowed Rogers and Wagner to get an upper bound on
the difference between the leave-one-out error and the generalization error of such a classifier. These
results were later extended to obtain bounds on the generalization error of k-local rules in Devroye
and Wagner (1979a), and of potential rules in Devroye and Wagner (1979b).
In the early nineties, concentration inequalities became popular in the probabilistic analysis
of algorithms, due to the work of McDiarmid (1989) and started to be used as tools to derive
generalization bounds for learning algorithms by Devroye (1991). Building on this technique, Lugosi
and Pawlak (1994) obtained new bounds for the k-NN, kernel rules and histogram rules. These
bounds used smoothed estimates of the error which estimate the posterior probability of error
instead of simply counting the errors. This smoothing is very much related to the use of real-valued classifiers and we will see that it is at the heart of the applicability of stability analysis to
classification algorithms. A comprehensive account of the application of McDiarmid's inequality to
obtain bounds for the leave-one-out error or the smoothed error of local classifiers can be found in
Devroye et al. (1996).
Independently from this theoretical analysis, practical methods have been developed to deal with
instability of learning algorithms. In particular, Breiman (1996a,b) introduced the Bagging technique
which is presented as a method to combine single classifiers in such a way that the variance of the
overall combination is decreased. However, there is no theoretical guarantee that this variance
reduction will bring an improvement on the generalization error.
Finally, a more recent work has shown an interesting connection between stability and VC theory. Kearns and Ron (1999) derived what they called sanity-check bounds. In particular, they
proved that an algorithm having a search space of finite VC-dimension is stable in the sense that its
stability (in a sense to be defined later) is bounded by its VC-dimension. Thus using the stability
as a complexity measure does not give worse bounds than using the VC-dimension.
The work presented here follows and extends the stability approach of Lugosi and Pawlak (1994)
in that we derive exponential upper bounds on the generalization error based on notions of stability.
It is based on earlier results presented in Bousquet and Elisseeff (2001). We consider both the leave-one-out error and the empirical error as possible estimates of the generalization error. We prove
stability bounds for a large class of algorithms which includes the Support Vector Machines, both in
the regression and in the classification cases. Also we generalize some earlier results from Devroye
and Wagner.

2. Preliminaries
We first introduce some notation and then the main tools we will use to derive inequalities.
2.1 Notations
$\mathcal{X}$ and $\mathcal{Y} \subset \mathbb{R}$ being respectively an input and an output space, we consider a training set
$$S = \{z_1 = (x_1, y_1), \dots, z_m = (x_m, y_m)\},$$
of size $m$ in $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ drawn i.i.d. from an unknown distribution $D$. A learning algorithm is a function $A$ from $\mathcal{Z}^m$ into $\mathcal{F} \subset \mathcal{Y}^{\mathcal{X}}$ which maps a learning set $S$ onto a function $A_S$ from $\mathcal{X}$ to $\mathcal{Y}$. To avoid complex notation, we consider only deterministic algorithms. It is also assumed that the algorithm $A$ is symmetric with respect to $S$, i.e. it does not depend on the order of the elements in the training set. Furthermore, we assume that all functions are measurable and all sets are countable, which does not limit the interest of the results presented here.
Given a training set $S$ of size $m$, we will build, for all $i = 1, \dots, m$, modified training sets as follows:

By removing the $i$-th element:
$$S^{\setminus i} = \{z_1, \dots, z_{i-1}, z_{i+1}, \dots, z_m\}.$$
By replacing the $i$-th element:
$$S^{i} = \{z_1, \dots, z_{i-1}, z_i', z_{i+1}, \dots, z_m\},$$
where the replacement example $z_i'$ is assumed to be drawn from $D$ and is independent from $S$.
Unless they are clear from context, the random variables over which we take probabilities and expectations will be specified in subscript. We thus introduce the notation $\mathbb{P}_S[\cdot]$ and $\mathbb{E}_S[\cdot]$ to denote respectively the probability and the expectation with respect to the random draw of the sample $S$ of size $m$ (drawn according to $D^m$). Similarly, $\mathbb{P}_z[\cdot]$ and $\mathbb{E}_z[\cdot]$ will denote the probability and expectation when $z$ is sampled according to $D$.
In order to measure the accuracy of the predictions of the algorithm, we will use a cost function $c : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$. The loss of a hypothesis $f$ with respect to an example $z = (x, y)$ is then defined as
$$\ell(f, z) = c(f(x), y).$$
We will consider several measures of the performance of an algorithm. The main quantity we are interested in is the risk or generalization error. This is a random variable depending on the training set $S$ and it is defined as
$$R(A, S) = \mathbb{E}_z[\ell(A_S, z)].$$
Unfortunately, $R$ cannot be computed since $D$ is unknown. We thus have to estimate it from the available data $S$. We will consider several estimators for this quantity.
The simplest estimator is the so-called empirical error (also known as resubstitution estimate) defined as
$$R_{emp}(A, S) = \frac{1}{m} \sum_{i=1}^m \ell(A_S, z_i).$$

Another classical estimator is the leave-one-out error (also known as deleted estimate) defined as
$$R_{loo}(A, S) = \frac{1}{m} \sum_{i=1}^m \ell(A_{S^{\setminus i}}, z_i).$$
When the algorithm is clear from context, we will simply write $R(S)$, $R_{emp}(S)$ and $R_{loo}(S)$. We will often simplify the notation further when the training sample is clear from context. In particular, we will use the shorthand notations $R \equiv R(A, S)$, $R_{emp} \equiv R_{emp}(A, S)$, and $R_{loo} \equiv R_{loo}(A, S)$.
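These estimators are straightforward to compute for any concrete algorithm. The following Python sketch (our illustration, not part of the original text) makes the definitions concrete; `train` stands for an arbitrary symmetric learning algorithm $A$ and `loss` for $\ell$, both placeholder assumptions:

```python
import numpy as np

def empirical_error(train, loss, S):
    """Resubstitution estimate: train on S, average the loss on S itself."""
    f = train(S)
    return np.mean([loss(f, z) for z in S])

def leave_one_out_error(train, loss, S):
    """Deleted estimate: for each i, train on S without z_i and test on z_i."""
    m = len(S)
    errors = []
    for i in range(m):
        f_i = train(S[:i] + S[i+1:])   # the function A_{S^{\i}}
        errors.append(loss(f_i, S[i]))
    return np.mean(errors)

# Toy instantiation (assumption): "learning" amounts to averaging the labels.
train = lambda S: (lambda x: np.mean([y for _, y in S]))
loss = lambda f, z: (f(z[0]) - z[1]) ** 2
S = [(x, 2.0 * x + np.random.randn() * 0.1) for x in np.linspace(0, 1, 20)]
print(empirical_error(train, loss, S), leave_one_out_error(train, loss, S))
```

Note that the leave-one-out estimate requires $m$ retrainings, one per held-out point.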
2.2 Main Tools
The study we describe here intends to bound the difference between empirical and generalization error for specific algorithms. For any $\epsilon > 0$ our goal is to bound the term
$$\mathbb{P}_S\left[|R_{emp}(A, S) - R(A, S)| > \epsilon\right], \tag{1}$$
which differs from what is usually studied in learning theory
$$\mathbb{P}_S\left[\sup_{f \in \mathcal{F}} |R_{emp}(f) - R(f)| > \epsilon\right]. \tag{2}$$
Indeed, we do not want to have a bound that holds uniformly over the whole space of possible functions since we are interested in algorithms that may not explore it. Moreover we may not even have a way to describe this space and assess its size. This explains why we want to focus on (1).
Our approach is based on inequalities that relate moments of multi-dimensional random functions to their first order finite differences. The first one is due to Steele (1986) and provides bounds for the variance. The second one is a version of Azuma's inequality due to McDiarmid (1989) and provides exponential bounds but its assumptions are more restrictive.
Theorem 1 (Steele, 1986) Let $S$ and $S^i$ be defined as above, and let $F : \mathcal{Z}^m \to \mathbb{R}$ be any measurable function. Then
$$\mathbb{E}_S\left[(F(S) - \mathbb{E}_S[F(S)])^2\right] \le \frac{1}{2} \sum_{i=1}^m \mathbb{E}_{S, z_i'}\left[(F(S) - F(S^i))^2\right].$$

Theorem 2 (McDiarmid, 1989) Let $S$ and $S^i$ be defined as above, and let $F : \mathcal{Z}^m \to \mathbb{R}$ be any measurable function for which there exist constants $c_i$ ($i = 1, \dots, m$) such that
$$\sup_{S \in \mathcal{Z}^m, z_i' \in \mathcal{Z}} |F(S) - F(S^i)| \le c_i,$$
then
$$\mathbb{P}_S\left[F(S) - \mathbb{E}_S[F(S)] \ge \epsilon\right] \le e^{-2\epsilon^2 / \sum_{i=1}^m c_i^2}.$$
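As a sanity check of Theorem 2, the following sketch (ours, not from the paper) simulates the simplest bounded-differences case, $F(S) = \frac{1}{m}\sum_i z_i$ with $z_i \in [0,1]$, where each $c_i = 1/m$ and the bound becomes $e^{-2m\epsilon^2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, trials, eps = 100, 20000, 0.1

# F(S) = mean of m i.i.d. uniform [0,1] variables; changing one point moves F
# by at most c_i = 1/m, so Theorem 2 gives
# P[F - E[F] >= eps] <= exp(-2 eps^2 / (m * (1/m)^2)) = exp(-2 m eps^2).
F = rng.uniform(0, 1, size=(trials, m)).mean(axis=1)
empirical_tail = np.mean(F - 0.5 >= eps)
mcdiarmid_bound = np.exp(-2 * m * eps ** 2)
print(empirical_tail, mcdiarmid_bound)  # empirical tail should sit below the bound
```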

3. Defining the Stability of a Learning Algorithm


There are many ways to define and quantify the stability of a learning algorithm. The natural way of making such a definition is to start from the goal: we want to get bounds on the generalization error of specific learning algorithms and we want these bounds to be tight when the algorithm satisfies the stability criterion.
As one may expect, the more restrictive a stability criterion is, the tighter the corresponding
bound will be.
In the learning model we consider, the randomness comes from the sampling of the training set.
We will thus consider stability with respect to changes in the training set. Moreover, we need an easy-to-check criterion, so we will consider only restricted changes such as the removal or the replacement of one single example in the training set.
Although not explicitly mentioned in their work, the first such notion was used by Devroye and
Wagner (1979a) in order to get bounds on the variance of the error of local learning algorithms.
Later, Kearns and Ron (1999) stated it as a definition and gave it a name. We give here a slightly modified version of Kearns and Ron's definition that suits our needs.
Definition 3 (Hypothesis Stability) An algorithm $A$ has hypothesis stability $\beta$ with respect to the loss function $\ell$ if the following holds:
$$\forall i \in \{1, \dots, m\}, \quad \mathbb{E}_{S, z}\left[|\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)|\right] \le \beta. \tag{3}$$
Note that this is the $L_1$ norm with respect to $D$, so that we can rewrite the above as
$$\mathbb{E}_S\left[\|\ell(A_S, \cdot) - \ell(A_{S^{\setminus i}}, \cdot)\|_1\right] \le \beta.$$
We will also use a variant of the above definition in which, instead of measuring the average change, we measure the change at one of the training points.
Definition 4 (Pointwise Hypothesis Stability) An algorithm $A$ has pointwise hypothesis stability $\beta$ with respect to the loss function $\ell$ if the following holds:
$$\forall i \in \{1, \dots, m\}, \quad \mathbb{E}_S\left[|\ell(A_S, z_i) - \ell(A_{S^{\setminus i}}, z_i)|\right] \le \beta. \tag{4}$$

Another, weaker notion of stability was introduced by Kearns and Ron. It consists of measuring the change in the expected error of the algorithm instead of the average pointwise change.
Definition 5 (Error Stability) An algorithm $A$ has error stability $\beta$ with respect to the loss function $\ell$ if the following holds:
$$\forall S \in \mathcal{Z}^m, \forall i \in \{1, \dots, m\}, \quad \left|\mathbb{E}_z[\ell(A_S, z)] - \mathbb{E}_z[\ell(A_{S^{\setminus i}}, z)]\right| \le \beta, \tag{5}$$
which can also be written
$$\forall S \in \mathcal{Z}^m, \forall i \in \{1, \dots, m\}, \quad |R(S) - R^{\setminus i}(S)| \le \beta. \tag{6}$$
Finally, we introduce a stronger notion of stability which will allow us to get tight bounds. Moreover, we will show that it can be applied to large classes of algorithms.
Definition 6 (Uniform Stability) An algorithm $A$ has uniform stability $\beta$ with respect to the loss function $\ell$ if the following holds:
$$\forall S \in \mathcal{Z}^m, \forall i \in \{1, \dots, m\}, \quad \|\ell(A_S, \cdot) - \ell(A_{S^{\setminus i}}, \cdot)\|_\infty \le \beta. \tag{7}$$
Notice that (3) implies (5), and (7) implies (3), so that uniform stability is the strongest notion.
Considered as a function of $m$, the term $\beta$ will sometimes be denoted by $\beta_m$. We will say that an algorithm is stable when the value of $\beta_m$ decreases as $\frac{1}{m}$. An algorithm with uniform stability $\beta$ also has the following property:
$$\forall S, \forall z_i', \quad |\ell(A_S, z) - \ell(A_{S^i}, z)| \le |\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)| + |\ell(A_{S^i}, z) - \ell(A_{S^{\setminus i}}, z)| \le 2\beta.$$
In other words, stability with respect to the exclusion of one point implies stability with respect to changes of one point.
We will assume further that, as a function of the sample size, the stability is non-increasing. This will be the case in all our examples. This assumption is not restrictive since its only purpose is to simplify the statement of the theorems (we will always upper bound $\beta_{m-1}$ by $\beta_m$).
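These definitions can be probed numerically for a concrete algorithm. The sketch below (ours) compares $A_S$ and $A_{S^{\setminus i}}$ on fresh points; the average difference estimates hypothesis stability and the maximum gives a lower estimate of uniform stability (the true $\beta$ is a supremum over all $S$ and $z$, so no finite sample certifies it). The functions `train` and `loss` are placeholders:

```python
import numpy as np

def stability_estimates(train, loss, S, test_points, i=0):
    """Compare A_S and A_{S\\i}: average and max loss change over test_points."""
    f_full = train(S)
    f_loo = train(S[:i] + S[i+1:])                 # remove z_i
    diffs = [abs(loss(f_full, z) - loss(f_loo, z)) for z in test_points]
    return np.mean(diffs), np.max(diffs)           # ~ hypothesis / uniform (lower est.)

# Toy algorithm (assumption): a regularized mean of the labels, deliberately stable.
lam = 1.0
train = lambda S: (lambda x: sum(y for _, y in S) / (len(S) * (1 + lam)))
loss = lambda f, z: abs(f(z[0]) - z[1])
rng = np.random.default_rng(1)
S = [(float(x), float(x + rng.normal(0, 0.1))) for x in rng.uniform(0, 1, 50)]
test = [(float(x), float(x)) for x in rng.uniform(0, 1, 200)]
print(stability_estimates(train, loss, S, test))
```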

4. Generalization Bounds for Stable Learning Algorithms


We start this section by introducing a useful lemma about the bias of the estimators we study.
Lemma 7 For any symmetric learning algorithm $A$, we have, for all $i \in \{1, \dots, m\}$:
$$\mathbb{E}_S[R(A, S) - R_{emp}(A, S)] = \mathbb{E}_{S, z_i'}\left[\ell(A_S, z_i') - \ell(A_{S^i}, z_i')\right],$$
and
$$\mathbb{E}_S\left[R(A, S^{\setminus i}) - R_{loo}(A, S)\right] = 0,$$
and
$$\mathbb{E}_S[R(A, S) - R_{loo}(A, S)] = \mathbb{E}_{S, z}\left[\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)\right].$$
Proof For the first equality, we just need to compute the expectation of $R_{emp}(A, S)$. We have
$$\mathbb{E}_S[R_{emp}(S)] = \frac{1}{m} \sum_{j=1}^m \mathbb{E}_S[\ell(A_S, z_j)],$$
and, renaming $z_j$ as $z_i'$, we get, for all $i \in \{1, \dots, m\}$,
$$\mathbb{E}_S[R_{emp}(S)] = \mathbb{E}_{S, z_i'}[\ell(A_{S^i}, z_i')],$$
by the i.i.d. and the symmetry assumptions. This proves the first equality. Similarly we have
$$\mathbb{E}_S[R_{loo}(S)] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_S[\ell(A_{S^{\setminus i}}, z_i)] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_{S, z}[\ell(A_{S^{\setminus i}}, z)],$$
from which we deduce the second and third equalities.

Remark 8 We notice from the above lemma, comparing the first and last equalities, that the empirical error and the leave-one-out error differ from the true error in a similar way. It is usually
accepted that the empirical error is very much optimistically biased while the leave-one-out error
is almost unbiased (due to the second equation of the lemma). However, we will see that for the
particular algorithms we have in mind (which display high stability), the two estimators are very
close to each other. The similarity of the bounds we will derive for both estimators will be striking.
This can be explained intuitively by the fact that we are considering algorithms that do not directly
minimize the empirical error but rather a regularized version of it, so that the bias in the empirical
error will be reduced.
4.1 Polynomial Bounds with Hypothesis Stability
In this section we generalize a lemma from Devroye and Wagner (1979b). Their approach consists in bounding the second order moment of the estimators with the hypothesis stability of the algorithm. For this purpose, one could simply use Theorem 1. However this theorem gives a bound on the variance, and we need here the second order moment of the difference between the error (leave-one-out or empirical) and the generalization error. It turns out that a direct study of this quantity leads to better constants than the use of Theorem 1.
Lemma 9 For any learning algorithm $A$ and loss function $\ell$ such that $0 \le c(y, y') \le M$, we have, for any $i, j \in \{1, \dots, m\}$, $i \ne j$, for the empirical error,
$$\mathbb{E}_S\left[(R - R_{emp})^2\right] \le \frac{M^2}{2m} + 3M \, \mathbb{E}_{S, z_i'}\left[|\ell(A_S, z_i) - \ell(A_{S^i}, z_i)|\right], \tag{8}$$
and
$$\mathbb{E}_S\left[(R - R_{emp})^2\right] \le \frac{M^2}{2m} + M \, \mathbb{E}_{S, z_i', z}\left[|\ell(A_S, z) - \ell(A_{S^i}, z)|\right] + M \, \mathbb{E}_{S, z_i'}\left[|\ell(A_S, z_j) - \ell(A_{S^i}, z_j)|\right] + M \, \mathbb{E}_{S, z_i'}\left[|\ell(A_S, z_i) - \ell(A_{S^i}, z_i)|\right],$$
and for the leave-one-out error,
$$\mathbb{E}_S\left[(R - R_{loo})^2\right] \le \frac{M^2}{2m} + 3M \, \mathbb{E}_{S, z}\left[|\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)|\right], \tag{9}$$
and
$$\mathbb{E}_S\left[(R - R_{loo})^2\right] \le \frac{M^2}{2m} + 2M \, \mathbb{E}_{S, z_i', z}\left[|\ell(A_S, z) - \ell(A_{S^i}, z)| + |\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)|\right]. \tag{10}$$
The proof of this lemma is given in the appendix.

Remark 10 Notice that Devroye and Wagner's work focused on the leave-one-out estimator and on classification. We extend it to regression and to the empirical estimator, which they treated with the following easy-to-prove inequality
$$\mathbb{E}_S\left[(R - R_{emp})^2\right] \le 2\mathbb{E}_S\left[(R - R_{loo})^2\right] + 2M \, \mathbb{E}_S\left[|\ell(A_S, z_i) - \ell(A_{S^{\setminus i}}, z_i)|\right],$$
which gives a similar result but with worse constants.

Let's try to explain the various quantities that appear in the upper bounds of the above lemma. We notice that the term $\frac{M^2}{2m}$ is always present; it cannot be avoided even for a very stable algorithm and somehow corresponds to the bias of the estimator. In Inequality (8), the expectation on the right-hand side corresponds to the following situation: starting from training set $S$ we measure the error at point $z_i \in S$, then we replace $z_i \in S$ by $z_i'$ and we again measure the error at $z_i$, which is no longer in the training set. Then, in the second inequality of Lemma 9 several different quantities appear. They all correspond to comparing the algorithm trained on $S$ and on $S^i$ (where $z_i$ is replaced by $z_i'$) but the comparison point differs: it is either $z$, a point which is not part of the training set, or $z_j$, a point of the training set different from $z_i$, or finally $z_i$ itself.
For the leave-one-out error, in (9) we consider the average difference in error when trained on $S$ and on $S^{\setminus i}$ (where $z_i$ has been removed), and in (10) the first expectation on the right-hand side corresponds to the average difference in error when one point is changed, while the second one is the average difference in error when one point is removed.
All these quantities capture a certain aspect of the stability of the algorithm. In order to use the lemma, we need to bound them for specific algorithms. Instead of using all these different quantities, we will rather focus on the few notions of stability we introduced and see how they are related. We will see later how they can be computed (or upper bounded) in particular cases.
Now that we have a bound on the expected squared deviation of the estimator from the true error, the next step is to use Chebyshev's inequality in order to get a bound on the deviation which holds with high probability.
Theorem 11 For any learning algorithm $A$ with hypothesis stability $\beta_1$ and pointwise hypothesis stability $\beta_2$ with respect to a loss function $\ell$ such that $0 \le c(y, y') \le M$, we have with probability $1 - \delta$,
$$R(A, S) \le R_{emp}(A, S) + \sqrt{\frac{M^2 + 12Mm\beta_2}{2m\delta}},$$
and
$$R(A, S) \le R_{loo}(A, S) + \sqrt{\frac{M^2 + 6Mm\beta_1}{2m\delta}}.$$
Proof First, notice that for all $S$ and all $z$,
$$|\ell(A_S, z) - \ell(A_{S^i}, z)| \le |\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)| + |\ell(A_{S^{\setminus i}}, z) - \ell(A_{S^i}, z)|,$$
so that we get
$$\mathbb{E}_{S, z_i'}\left[|\ell(A_S, z_i) - \ell(A_{S^i}, z_i)|\right] \le \mathbb{E}_S\left[|\ell(A_S, z_i) - \ell(A_{S^{\setminus i}}, z_i)|\right] + \mathbb{E}_{S, z_i'}\left[|\ell(A_{S^{\setminus i}}, z_i) - \ell(A_{S^i}, z_i)|\right] \le 2\beta_2.$$
We thus get by (8)
$$\mathbb{E}_S\left[(R - R_{emp})^2\right] \le \frac{M^2}{2m} + 6M\beta_2.$$
Also, we have by (9)
$$\mathbb{E}_S\left[(R - R_{loo})^2\right] \le \frac{M^2}{2m} + 3M\beta_1.$$
Now, recall that Chebyshev's inequality gives, for a random variable $X$,
$$\mathbb{P}[X \ge \epsilon] \le \frac{\mathbb{E}\left[X^2\right]}{\epsilon^2},$$
which in turn gives that for all $\delta > 0$, with probability at least $1 - \delta$,
$$X \le \sqrt{\frac{\mathbb{E}\left[X^2\right]}{\delta}}.$$
Applying this to $R - R_{emp}$ and $R - R_{loo}$ respectively gives the result.


As pointed out earlier, there is a striking similarity between the above bounds which seems to
support the fact that for a stable algorithm, the two estimators that we are considering have a closely
related behavior.
In the next section we will see how to use the exponential inequality of Theorem 2 to get better
bounds.
4.2 Exponential Bounds with Uniform Stability
Devroye and Wagner (1979a) first proved exponential bounds for k-local algorithms. However, the
question of whether their technique can be extended to more general classes of algorithms is a topic
for further research.
In Devroye et al. (1996) another, more general technique is introduced which relies on concentration inequalities. Inspired by this approach, we will derive exponential bounds for algorithms based
on their uniform stability.
We will study separately the regression and the classification cases for reasons that will be made
clear.
4.2.1 Regression Case
A stable algorithm has the property that removing one element in its learning set does not change
much of its outcome. As a consequence, the difference between empirical and generalization error,
if thought as a random variable, should have a small variance. If its expectation is small, stable
algorithms should then be good candidates for their empirical error to be close to their generalization
error. This assertion is formulated in the following theorem:
Theorem 12 Let $A$ be an algorithm with uniform stability $\beta$ with respect to a loss function $\ell$ such that $0 \le \ell(A_S, z) \le M$, for all $z \in \mathcal{Z}$ and all sets $S$. Then, for any $m \ge 1$ and any $\delta \in (0, 1)$, the following bounds hold (separately) with probability at least $1 - \delta$ over the random draw of the sample $S$:
$$R \le R_{emp} + 2\beta + (4m\beta + M)\sqrt{\frac{\ln 1/\delta}{2m}}, \tag{11}$$
and
$$R \le R_{loo} + \beta + (4m\beta + M)\sqrt{\frac{\ln 1/\delta}{2m}}. \tag{12}$$
Remark 13 This theorem gives tight bounds when the stability scales as 1/m. We will prove that
this is the case for several known algorithms in later sections.
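To compare the polynomial bound of Theorem 11 with the exponential bound of Theorem 12 numerically, the sketch below (ours) evaluates both slack terms for a stability that scales as $\beta_m = \kappa^2/(2\lambda m)$, the rate obtained for regularization algorithms in Section 5 (Theorem 22); all constants are placeholder values:

```python
import numpy as np

M, kappa, lam, delta = 1.0, 1.0, 0.1, 0.05   # placeholder constants (assumptions)
ms = np.array([100, 1000, 10000, 100000])
beta = kappa ** 2 / (2 * lam * ms)           # stability scaling as 1/m (Theorem 22)

# Theorem 11 (Chebyshev): sqrt((M^2 + 12 M m beta) / (2 m delta)), with beta_2 = beta
poly = np.sqrt((M ** 2 + 12 * M * ms * beta) / (2 * ms * delta))
# Theorem 12 (McDiarmid): 2 beta + (4 m beta + M) sqrt(ln(1/delta) / (2 m))
expo = 2 * beta + (4 * ms * beta + M) * np.sqrt(np.log(1 / delta) / (2 * ms))
for m, p, e in zip(ms, poly, expo):
    print(f"m={m:>6d}  polynomial={p:.4f}  exponential={e:.4f}")
```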
Proof Let's prove that the conditions of Theorem 2 are verified by the random variables of interest. First we study how these variables change when one training example is removed. We have
$$|R - R^{\setminus i}| \le \mathbb{E}_z\left[|\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)|\right] \le \beta, \tag{13}$$
and
$$|R_{emp} - R_{emp}^{\setminus i}| \le \frac{1}{m} \sum_{j \ne i} |\ell(A_S, z_j) - \ell(A_{S^{\setminus i}}, z_j)| + \frac{1}{m}|\ell(A_S, z_i)| \le \beta + \frac{M}{m}.$$
Then we upper bound the variation when one training example is changed:
$$|R - R^i| \le |R - R^{\setminus i}| + |R^{\setminus i} - R^i| \le 2\beta.$$
Similarly we can write
$$|R_{emp} - R_{emp}^i| \le |R_{emp} - R_{emp}^{\setminus i}| + |R_{emp}^{\setminus i} - R_{emp}^i| \le 2\beta + 2\frac{M}{m}.$$
However, a closer look reveals that the second factor of $\frac{M}{m}$ is not needed. Indeed, we have
$$|R_{emp} - R_{emp}^i| \le \frac{1}{m} \sum_{j \ne i} |\ell(A_S, z_j) - \ell(A_{S^i}, z_j)| + \frac{1}{m}|\ell(A_S, z_i) - \ell(A_{S^i}, z_i')|$$
$$\le \frac{1}{m} \sum_{j \ne i} |\ell(A_S, z_j) - \ell(A_{S^{\setminus i}}, z_j)| + \frac{1}{m} \sum_{j \ne i} |\ell(A_{S^{\setminus i}}, z_j) - \ell(A_{S^i}, z_j)| + \frac{1}{m}|\ell(A_S, z_i) - \ell(A_{S^i}, z_i')| \le 2\beta + \frac{M}{m}.$$
Thus the random variable $R - R_{emp}$ satisfies the conditions of Theorem 2 with $c_i = 4\beta + \frac{M}{m}$.
It thus remains to bound the expectation of this random variable, which can be done using Lemma 7 and the $\beta$-stability property:
$$\mathbb{E}_S[R - R_{emp}] \le \mathbb{E}_{S, z_i'}\left[|\ell(A_S, z_i') - \ell(A_{S^i}, z_i')|\right] \le \mathbb{E}_{S, z_i'}\left[|\ell(A_{S^i}, z_i') - \ell(A_{S^{\setminus i}}, z_i')|\right] + \mathbb{E}_{S, z_i'}\left[|\ell(A_{S^{\setminus i}}, z_i') - \ell(A_S, z_i')|\right] \le 2\beta.$$
This yields
$$\mathbb{P}_S\left[R - R_{emp} > \epsilon + 2\beta_m\right] \le \exp\left(-\frac{2m\epsilon^2}{(4m\beta_m + M)^2}\right).$$
Thus, setting the right hand side to $\delta$, we obtain that with probability at least $1 - \delta$,
$$R \le R_{emp} + 2\beta_m + (4m\beta_m + M)\sqrt{\frac{\ln 1/\delta}{2m}},$$
that is,
$$R \le R_{emp} + 2\beta_m\left(1 + \sqrt{2m \ln 1/\delta}\right) + M\sqrt{\frac{\ln 1/\delta}{2m}},$$
which gives (11).
For the leave-one-out error, we proceed similarly. We have
$$|R_{loo} - R_{loo}^{\setminus i}| \le \frac{1}{m} \sum_{j \ne i} |\ell(A_{S^{\setminus j}}, z_j) - \ell(A_{S^{\setminus i,j}}, z_j)| + \frac{1}{m}|\ell(A_{S^{\setminus i}}, z_i)| \le \beta_{m-1} + \frac{M}{m},$$
and also
$$|R_{loo} - R_{loo}^i| \le 2\beta_{m-1} + \frac{M}{m} \le 2\beta_m + \frac{M}{m},$$
so that Theorem 2 can be applied to $R - R_{loo}$ with $c_i = 4\beta_m + \frac{M}{m}$. Then we use Lemma 7 along with (13) to deduce
$$\mathbb{P}_S\left[R - R_{loo} > \epsilon + \beta_m\right] \le \exp\left(-\frac{2m\epsilon^2}{(4m\beta_m + M)^2}\right),$$
which gives (12) by setting the right hand side to $\delta$ and using $\beta_{m-1} \le \beta_m$.
Once again, we notice that the bounds for the empirical error and for the leave-one-out error
are very similar. As we will see in later sections, this clearly indicates that our method is not at all
suited to the analysis of algorithms which simply perform the minimization of the empirical error
(which are not stable in the sense defined above).
4.2.2 Classification Case
In this section we consider the case where $\mathcal{Y} = \{-1, 1\}$ and the algorithm $A$ returns a function $A_S$ that maps instances in $\mathcal{X}$ to labels in $\{-1, 1\}$. The cost function is then simply
$$c(A_S(x), y) = \mathbb{1}_{\{y A_S(x) \le 0\}}.$$
Thus we see that, because of the discrete nature of the cost function, the uniform stability of an algorithm with respect to such a cost function can only be $\beta = 0$ or $\beta = 1$. In the first case, it means that the algorithm always returns the same function. In the second case there is no hope of obtaining interesting bounds, since we saw that we need $\beta = O(\frac{1}{m})$ for our bounds to give interesting results.
We thus have to proceed in a different way. One possible approach is to modify our error estimates so that they become smoother and have higher stability. The idea of smoothing error estimators to decrease their variance is not new, and it has even been used in conjunction with McDiarmid's inequality by Lugosi and Pawlak (1994) in order to derive error bounds for certain algorithms. Lugosi and Pawlak studied algorithms which produce estimates for the distributions $P(X|Y = -1)$ and $P(X|Y = +1)$ and defined analogues of the resubstitution and leave-one-out estimates of the error suited to these algorithms.
Here we will take a related, though slightly different route. Indeed, we will consider algorithms having a real-valued output. However, we do not require this output to correspond to a posterior probability; it should simply have the correct sign. That is, the label predicted by such an algorithm is the sign of its real-valued output. Of course, a good algorithm will produce outputs whose absolute value somehow represents the confidence it has in the prediction.
In order to apply the results obtained so far to this setting, we need to introduce some definitions.
Definition 14 A real-valued classification algorithm $A$ is a learning algorithm that maps training sets $S$ to functions $A_S : \mathcal{X} \to \mathbb{R}$ such that the label predicted on an instance $x$ is the sign of $A_S(x)$.
This class of algorithms includes for instance the classifiers produced by SVM or by ensemble methods such as boosting.
Notice that the cost function defined above extends to the case where the first argument is a real number and has the desired properties: it is zero when the algorithm does predict the right label and 1 otherwise.
Definition 15 (Classification Stability) A real-valued classification algorithm $A$ has classification stability $\beta$ if the following holds:
$$\forall S \in \mathcal{Z}^m, \forall i \in \{1, \dots, m\}, \quad \|A_S(\cdot) - A_{S^{\setminus i}}(\cdot)\|_\infty \le \beta. \tag{14}$$

We introduce a modified cost function:
$$c_\gamma(y, y') = \begin{cases} 1 & \text{for } yy' \le 0 \\ 1 - yy'/\gamma & \text{for } 0 \le yy' \le \gamma \\ 0 & \text{for } yy' \ge \gamma \end{cases}$$
and we denote
$$\ell_\gamma(f, z) = c_\gamma(f(x), y).$$
Accordingly, we define the following error estimates:
$$R_{emp}^\gamma(A, S) = \frac{1}{m} \sum_{i=1}^m \ell_\gamma(A_S, z_i),$$
and similarly,
$$R_{loo}^\gamma(A, S) = \frac{1}{m} \sum_{i=1}^m \ell_\gamma(A_{S^{\setminus i}}, z_i).$$
The loss $\ell_\gamma$ will count an error each time the function $f$ gives an output close to zero, the closeness being controlled by $\gamma$.
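The modified cost $c_\gamma$ interpolates linearly between an error count and a margin condition; a direct transcription (ours) in Python:

```python
import numpy as np

def c_gamma(y_pred, y_true, gamma):
    """Clipped cost: 1 if y*y' <= 0, 1 - y*y'/gamma if 0 <= y*y' <= gamma, else 0."""
    margin = y_pred * y_true
    return float(np.clip(1.0 - margin / gamma, 0.0, 1.0))

def r_emp_gamma(f, S, gamma):
    """Clipped empirical error of a real-valued classifier f on a sample S."""
    return np.mean([c_gamma(f(x), y, gamma) for x, y in S])
```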
Lemma 16 A real-valued classification algorithm $A$ with classification stability $\beta$ has uniform stability $\beta/\gamma$ with respect to the loss function $\ell_\gamma$.
Proof It is easy to see that $c_\gamma$ is $1/\gamma$-Lipschitz with respect to its first argument, and so is $\ell_\gamma$ by definition. Thus we have, for all $i$, all training sets $S$, and all $z$,
$$|\ell_\gamma(A_S, z) - \ell_\gamma(A_{S^{\setminus i}}, z)| = |c_\gamma(A_S(x), y) - c_\gamma(A_{S^{\setminus i}}(x), y)| \le \frac{1}{\gamma}|A_S(x) - A_{S^{\setminus i}}(x)| \le \beta/\gamma.$$
We can thus apply Theorem 12 with the loss function $\ell_\gamma$ and get the following theorem.
Theorem 17 Let $A$ be a real-valued classification algorithm with stability $\beta$. Then, for all $\gamma > 0$, any $m \ge 1$, and any $\delta \in (0, 1)$, with probability at least $1 - \delta$ over the random draw of the sample $S$,
$$R \le R_{emp}^\gamma + 2\frac{\beta}{\gamma} + \left(4m\frac{\beta}{\gamma} + 1\right)\sqrt{\frac{\ln 1/\delta}{2m}}, \tag{15}$$
and with probability at least $1 - \delta$ over the random draw of the sample $S$,
$$R \le R_{loo}^\gamma + \frac{\beta}{\gamma} + \left(4m\frac{\beta}{\gamma} + 1\right)\sqrt{\frac{\ln 1/\delta}{2m}}. \tag{16}$$
Proof We apply Theorem 12 to $A$ with the loss function $\ell_\gamma$, which is bounded by $M = 1$ and for which the algorithm is $\beta/\gamma$-stable. Moreover, we use the fact that $R(A_S) \le R^\gamma = \mathbb{E}_z[\ell_\gamma(A_S, z)]$.
In order to make this result more practically useful, we need a statement that would hold uniformly for all values of $\gamma$. The same techniques as in Bartlett (1996) lead to the following result:
Theorem 18 Let $A$ be a real-valued classification algorithm with stability $\beta$ and let $B$ be some real number. Then, for any $m \ge 1$ and any $\delta \in (0, 1)$, with probability at least $1 - \delta$ over the random draw of the sample $S$,
$$\forall \gamma \in (0, B], \quad R \le R_{emp}^\gamma + 2\frac{e\beta}{\gamma} + \left(4m\frac{e\beta}{\gamma} + 1\right)\sqrt{\frac{1}{2m}}\left(\sqrt{\ln 1/\delta} + \sqrt{2\ln\ln\frac{eB}{\gamma}}\right), \tag{17}$$
and
$$\forall \gamma \in (0, B], \quad R \le R_{loo}^\gamma + \frac{e\beta}{\gamma} + \left(4m\frac{e\beta}{\gamma} + 1\right)\sqrt{\frac{1}{2m}}\left(\sqrt{\ln 1/\delta} + \sqrt{2\ln\ln\frac{eB}{\gamma}}\right). \tag{18}$$
We defer the proof of this theorem to the appendix.
We can thus apply Theorem 18 with a value of $\gamma$ which is optimized after having seen the data.

5. Stable Learning Algorithms


As seen in previous sections, our approach allows one to derive bounds on the generalization error from the empirical and leave-one-out errors which depend on the stability of the algorithm. However, we noticed that the bounds we obtain for the two estimators are very similar. This readily implies that the method is suited to the study of algorithms for which the empirical error is close to the leave-one-out error. There is thus no hope of getting good bounds for algorithms which simply minimize the empirical error, since their empirical error will be very optimistically biased compared to their leave-one-out error.
This means that, in order to be stable in the sense defined above, a learning algorithm has to significantly depart from an empirical risk minimizer. It thus has to accept a significant number of training errors (which should however not be larger than the noise level). In order to generalize, these extra training errors will thus be compensated by a decrease of the complexity of the learned function.
In some sense, this is exactly what regularization-based algorithms do: they minimize an objective
function which is the sum of an empirical error term and a regularizing term which penalizes the
complexity of the solution. This explains why our approach is particularly well suited for the analysis
of such algorithms.
5.1 Previous Results for k-Local Rules
As an illustration of the various notions of stability, we will first study the case of k-Local Rules for
which a large number of results were obtained.
A k-Local Rule is a classification algorithm that determines the label of an instance x based on
the k closest instances in the training set. The simplest example of such a rule is the k-Nearest
Neighbors (k-NN) algorithm which computes the label by a majority vote among the labels of the k
nearest instances in the training set. Such an algorithm can be studied as a {0, 1}-valued classifier
or as a [0, 1]-valued classifier if we take into account the result of the vote.
We will consider the real-valued version of the k-NN classifier and give a result about its stability
with respect to different loss functions.
1. With respect to the $\{0, 1\}$-loss function, the $k$-NN classifier has hypothesis stability
$$\beta \le \frac{4}{m}\sqrt{\frac{k}{2\pi}}.$$
This was proven in Devroye and Wagner (1979a). We will not reproduce the proof, which is quite technical, but notice that a symmetry argument readily gives
$$\mathbb{P}[A_S(z) \ne A_{S^{\setminus i}}(z)] \le \frac{k}{m}$$
(a small simulation sketch checking this bound follows this list).
2. With respect to the absolute loss function ($c(y, y') = |y - y'|$), the $k$-NN classifier has only a trivial uniform stability, which is the bound on the values of $y$.
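As announced in item 1, the symmetry bound $\mathbb{P}[A_S(z) \ne A_{S^{\setminus i}}(z)] \le \frac{k}{m}$ is easy to check by simulation. A sketch (ours, on toy one-dimensional data that are our own assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, trials = 50, 5, 2000

def knn_predict(X, Y, x, k):
    """Majority vote among the k nearest training points (1-D toy data)."""
    idx = np.argsort(np.abs(X - x))[:k]
    return 1 if Y[idx].sum() >= 0 else -1

disagree = 0
for _ in range(trials):
    X = rng.uniform(-1, 1, m)
    Y = np.where(X + rng.normal(0, 0.3, m) >= 0, 1, -1)  # noisy threshold labels
    z = rng.uniform(-1, 1)
    full = knn_predict(X, Y, z, k)
    loo = knn_predict(X[1:], Y[1:], z, k)                # remove one training point
    disagree += (full != loo)
print(disagree / trials, "vs bound k/m =", k / m)
```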


The polynomial bound that can be obtained from hypothesis stability suggests that $k$ should be small if one wants a good bound. This is somehow counter-intuitive since the decision seems more robust to noise when many points are involved in the vote. There exist exponential bounds on the leave-one-out estimate of $k$-NN for the $\{0,1\}$-loss obtained by Devroye and Wagner (1979a) and for the smoothed error estimate (i.e. with respect to the absolute loss) obtained by Lugosi and Pawlak (1994), and these bounds do not depend on the parameter $k$ (due to a more careful application of McDiarmid's inequality suited to the algorithm). We may then wonder in that case whether the polynomial bounds are interesting compared to exponential ones, since the latter are sharper and closer to the intuitive interpretation. Despite this example, we believe that in general polynomial bounds could give relevant hints about which features of the learning algorithm lead to good generalization.
In the remainder, we will consider several algorithms that have not been studied from a stability
perspective and we will focus on their uniform stability only, which turns out to be quite good.
Obtaining results directly for their hypothesis stability remains an open problem.
5.2 Stability of Regularization Algorithms
Uniform stability may appear to be a strict condition. Actually, we will see in this section that many existing learning methods exhibit a uniform stability which is controlled by the regularization parameter $\lambda$ and can thus be very small.
5.2.1 Stability for General Regularizers
Recall that $\ell(f, z) = c(f(x), y)$. We assume in this section that $\mathcal{F}$ is a convex subset of a linear space.
Definition 19 A loss function $\ell$ defined on $\mathcal{F} \times \mathcal{Y}$ is $\sigma$-admissible with respect to $\mathcal{F}$ if the associated cost function $c$ is convex with respect to its first argument and the following condition holds:
$$\forall y_1, y_2 \in \mathcal{D}, \forall y' \in \mathcal{Y}, \quad |c(y_1, y') - c(y_2, y')| \le \sigma |y_1 - y_2|,$$
where $\mathcal{D} = \{y : \exists f \in \mathcal{F}, \exists x \in \mathcal{X}, f(x) = y\}$ is the domain of the first argument of $c$.
Thus in the case of the quadratic loss for example, this condition is verified if $\mathcal{Y}$ is bounded and $\mathcal{F}$ is totally bounded, that is, there exists $M < \infty$ such that
$$\forall f \in \mathcal{F}, \|f\|_\infty \le M \quad \text{and} \quad \forall y \in \mathcal{Y}, |y| \le M.$$
We introduce the objective function that the algorithm will minimize: let $N : \mathcal{F} \to \mathbb{R}_+$ be a function on $\mathcal{F}$, and define
$$R_r(g) := \frac{1}{m} \sum_{j=1}^m \ell(g, z_j) + \lambda N(g), \tag{19}$$
and a modified version (based on a truncated training set),
$$R_r^{\setminus i}(g) := \frac{1}{m} \sum_{j \ne i} \ell(g, z_j) + \lambda N(g). \tag{20}$$
Depending on the algorithm, $N$ will take different forms. To derive stability bounds, we need some general results about the minimizers of (19) and (20).
Lemma 20 Let $\ell$ be $\sigma$-admissible with respect to $\mathcal{F}$, and $N$ a functional defined on $\mathcal{F}$ such that for all training sets $S$, $R_r$ and $R_r^{\setminus i}$ have a minimum (not necessarily unique) in $\mathcal{F}$. Let $f$ denote a minimizer in $\mathcal{F}$ of $R_r$, and for $i = 1, \dots, m$, let $f^{\setminus i}$ denote a minimizer in $\mathcal{F}$ of $R_r^{\setminus i}$. We have, for any $t \in [0, 1]$,
$$N(f) - N(f + t\Delta f) + N(f^{\setminus i}) - N(f^{\setminus i} - t\Delta f) \le \frac{\sigma t}{\lambda m} |\Delta f(x_i)|, \tag{21}$$
where $\Delta f = f^{\setminus i} - f$.
Proof Let us introduce the notation
$$R_{emp}^{\setminus i}(f) := \frac{1}{m} \sum_{j \ne i} \ell(f, z_j).$$
Recall that a convex function $g$ verifies:
$$\forall x, y, \forall t \in [0, 1], \quad g(x + t(y - x)) - g(x) \le t(g(y) - g(x)).$$
Since $c$ is convex, $R_{emp}^{\setminus i}$ is convex too and thus, for $t \in [0, 1]$,
$$R_{emp}^{\setminus i}(f + t\Delta f) - R_{emp}^{\setminus i}(f) \le t\left(R_{emp}^{\setminus i}(f^{\setminus i}) - R_{emp}^{\setminus i}(f)\right).$$
We can also get (switching the roles of $f$ and $f^{\setminus i}$):
$$R_{emp}^{\setminus i}(f^{\setminus i} - t\Delta f) - R_{emp}^{\setminus i}(f^{\setminus i}) \le t\left(R_{emp}^{\setminus i}(f) - R_{emp}^{\setminus i}(f^{\setminus i})\right).$$
Summing the two preceding inequalities yields
$$R_{emp}^{\setminus i}(f + t\Delta f) - R_{emp}^{\setminus i}(f) + R_{emp}^{\setminus i}(f^{\setminus i} - t\Delta f) - R_{emp}^{\setminus i}(f^{\setminus i}) \le 0. \tag{22}$$
Now, by assumption we have
$$R_r(f) - R_r(f + t\Delta f) \le 0, \tag{23}$$
$$R_r^{\setminus i}(f^{\setminus i}) - R_r^{\setminus i}(f^{\setminus i} - t\Delta f) \le 0, \tag{24}$$
so that, summing the two previous inequalities and using (22), we get
$$c(f(x_i), y_i) - c((f + t\Delta f)(x_i), y_i) + \lambda m\left(N(f) - N(f + t\Delta f) + N(f^{\setminus i}) - N(f^{\setminus i} - t\Delta f)\right) \le 0,$$
and thus, by the $\sigma$-admissibility condition, we get
$$N(f) - N(f + t\Delta f) + N(f^{\setminus i}) - N(f^{\setminus i} - t\Delta f) \le \frac{\sigma t}{\lambda m} |\Delta f(x_i)|.$$

In the above lemma, there is no assumption about the space $\mathcal{F}$ (apart from being a convex subset of a linear space) or about the regularizer $N$, apart from the existence of minima for $R_r$ and $R_r^{\setminus i}$. However, most of the practical regularization-based algorithms work with a space $\mathcal{F}$ that is a vector space and with a convex regularizer. We will thus refine our previous result in this particular setting. In order to do this, we need some standard definitions about convex functions, which we defer to Appendix C; most of this material can be found in Rockafellar (1970) and in Gordon (1999).
Lemma 21 Under the conditions of Lemma 20, when $\mathcal{F}$ is a vector space and $N$ is a proper closed convex function from $\mathcal{F}$ to $\mathbb{R} \cup \{-\infty, +\infty\}$, we have
$$\lambda\left(d_N(f, f^{\setminus i}) + d_N(f^{\setminus i}, f)\right) \le \frac{1}{m}\left(\ell(f^{\setminus i}, z_i) - \ell(f, z_i) - d_{\ell(\cdot, z_i)}(f^{\setminus i}, f)\right) \le \frac{\sigma}{m} |\Delta f(x_i)|,$$
when $N$ and $\ell$ are differentiable.


Proof We start with the differentiable case and work with regular divergences. By the definition of $f$ and $f^{\setminus i}$, we have, using (30),
$$d_{R_r}(f^{\setminus i}, f) + d_{R_r^{\setminus i}}(f, f^{\setminus i}) = R_r(f^{\setminus i}) - R_r(f) + R_r^{\setminus i}(f) - R_r^{\setminus i}(f^{\setminus i}) = \frac{1}{m}\ell(f^{\setminus i}, z_i) - \frac{1}{m}\ell(f, z_i).$$
Moreover, by the nonnegativity of divergences, we have
$$d_{R_{emp}^{\setminus i}}(f, f^{\setminus i}) + d_{R_{emp}^{\setminus i}}(f^{\setminus i}, f) \ge 0,$$
which, with the previous equality and the fact that $d_{A+B} = d_A + d_B$, gives
$$\lambda\left(d_N(f, f^{\setminus i}) + d_N(f^{\setminus i}, f)\right) \le \frac{1}{m}\left(\ell(f^{\setminus i}, z_i) - \ell(f, z_i) - d_{\ell(\cdot, z_i)}(f^{\setminus i}, f)\right),$$
and we obtain the first part of the result. For the second part, we notice that
$$\ell(f^{\setminus i}, z_i) - \ell(f, z_i) - d_{\ell(\cdot, z_i)}(f^{\setminus i}, f) \le \ell(f^{\setminus i}, z_i) - \ell(f, z_i),$$
by the nonnegativity of the divergence, and thus
$$\ell(f^{\setminus i}, z_i) - \ell(f, z_i) - d_{\ell(\cdot, z_i)}(f^{\setminus i}, f) \le \sigma |f^{\setminus i}(x_i) - f(x_i)|,$$
by the $\sigma$-admissibility condition.
The results in this section can be used to derive bounds on the stability of many learning algorithms. Each procedure that can be interpreted as the minimization of a regularized functional can be analyzed with these lemmas. The only thing that will change from one procedure to another is the regularizer $N$ and the cost function $c$. In the following, we show how to apply these theorems to different learning algorithms.
5.2.2 Application to Regularization in Hilbert Spaces
Many algorithms such as Support Vector Machines (SVM) or the classical regularization networks introduced by Poggio and Girosi (1990) perform the minimization of a regularized objective function where the regularizer is a norm in a reproducing kernel Hilbert space (RKHS):
$$N(f) = \|f\|_k^2,$$
where $k$ refers to the kernel (see e.g. Wahba, 2000, or Evgeniou et al., 1999, for definitions). The fundamental property of an RKHS $\mathcal{F}$ is the so-called reproducing property, which writes
$$\forall f \in \mathcal{F}, \forall x \in \mathcal{X}, \quad f(x) = \langle f, k(x, \cdot) \rangle.$$
In particular this gives, by the Cauchy-Schwarz inequality,
$$\forall f \in \mathcal{F}, \forall x \in \mathcal{X}, \quad |f(x)| \le \|f\|_k \sqrt{k(x, x)}. \tag{25}$$
We now state a result about the uniform stability of RKHS learning.
Theorem 22 Let $\mathcal{F}$ be a reproducing kernel Hilbert space with kernel $k$ such that $\forall x \in \mathcal{X}$, $k(x, x) \le \kappa^2 < \infty$. Let $\ell$ be $\sigma$-admissible with respect to $\mathcal{F}$. The learning algorithm $A$ defined by
$$A_S = \arg\min_{g \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m \ell(g, z_i) + \lambda \|g\|_k^2 \tag{26}$$
has uniform stability $\beta$ with respect to $\ell$ with
$$\beta \le \frac{\sigma^2 \kappa^2}{2\lambda m}.$$

Proof We use the proof technique described in the previous section. It can easily be checked that when $N(\cdot) = \|\cdot\|_k^2$ we have
$$d_N(g, g') = \|g - g'\|_k^2.$$
Thus, Lemma 20 gives
$$2\|\Delta f\|_k^2 \le \frac{\sigma}{\lambda m} |\Delta f(x_i)|.$$
Using (25), we get
$$|\Delta f(x_i)| \le \|\Delta f\|_k \sqrt{k(x_i, x_i)} \le \kappa \|\Delta f\|_k,$$
so that
$$\|\Delta f\|_k \le \frac{\sigma \kappa}{2\lambda m}.$$
Now we have, by the $\sigma$-admissibility of $\ell$,
$$|\ell(f, z) - \ell(f^{\setminus i}, z)| \le \sigma |f(x) - f^{\setminus i}(x)| = \sigma |\Delta f(x)|,$$
which, using (25) again, gives the result.
We are now one step away from being able to apply Theorem 12. The only thing that we need is to bound the loss function. Indeed, the $\sigma$-admissibility condition does not ensure boundedness. However, since we are in an RKHS, we can use the following simple lemma, which ensures that if we have an a priori bound on the target values $y$, then the boundedness condition is satisfied.
Lemma 23 Let $A$ be the algorithm of Theorem 22, where $\ell$ is a loss function associated to a convex cost function $c(\cdot, \cdot)$. We denote by $B(\cdot)$ a positive non-decreasing real-valued function such that for all $y \in \mathcal{D}$,
$$\forall y' \in \mathcal{Y}, \quad c(y, y') \le B(y).$$
For any training set $S$, we have
$$\|f\|_k^2 \le \frac{B(0)}{\lambda},$$
and also
$$\forall z \in \mathcal{Z}, \quad 0 \le \ell(A_S, z) \le \sigma\kappa\sqrt{\frac{B(0)}{\lambda}} + B(0).$$
Moreover, $\ell$ is $\sigma$-admissible, where $\sigma$ can be taken as
$$\sigma = \sup_{y' \in \mathcal{Y}} \; \sup_{|y| \le \kappa\sqrt{B(0)/\lambda}} \left|\frac{\partial c}{\partial y}(y, y')\right|.$$
Proof We have, for $f = A_S$,
$$R_r(f) \le R_r(\vec{0}) = \frac{1}{m} \sum_{i=1}^m \ell(\vec{0}, z_i) \le B(0),$$
and also $R_r(f) \ge \lambda \|f\|_k^2$, which gives the first inequality. The second inequality follows from (25) and the $\sigma$-admissibility of $\ell$. The last one is a consequence of the definition of $\sigma$-admissibility.

Example 1 (Stability of bounded SVM regression) Assume $k$ is a bounded kernel, that is, $k(x, x) \le \kappa^2$, and $\mathcal{Y} = [0, B]$. Consider the loss function
$$\ell(f, z) = |f(x) - y|_\epsilon = \begin{cases} 0 & \text{if } |f(x) - y| \le \epsilon \\ |f(x) - y| - \epsilon & \text{otherwise.} \end{cases}$$
This function is 1-admissible and we can state $B(y) = B$. The SVM algorithm for regression with a kernel $k$ can be defined as
$$A_S = \arg\min_{g \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m \ell(g, z_i) + \lambda \|g\|_k^2,$$
and we thus get the following stability bound:
$$\beta \le \frac{\kappa^2}{2\lambda m}.$$
Moreover, by Lemma 23 we have
$$\forall z \in \mathcal{Z}, \quad 0 \le \ell(A_S, z) \le \kappa\sqrt{\frac{B}{\lambda}} + B.$$
Plugging the above into Theorem 12 gives the following bound:
$$R \le R_{emp} + \frac{\kappa^2}{\lambda m} + \left(\frac{2\kappa^2}{\lambda} + \kappa\sqrt{\frac{B}{\lambda}} + B\right)\sqrt{\frac{\ln 1/\delta}{2m}}.$$
Note that we consider here SVM without the bias $b$, which is, strictly speaking, different from the true definition of SVM. The question of whether $b$ can be included in such a setting remains open.
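For concreteness, the constants of Example 1 can be evaluated directly. A sketch (ours) with placeholder values for $\kappa$, $B$, $\lambda$, $\delta$ and the empirical error:

```python
import numpy as np

def svm_regression_bound(r_emp, m, kappa, B, lam, delta):
    """Bound of Example 1: R <= R_emp + kappa^2/(lam m)
       + (2 kappa^2/lam + kappa sqrt(B/lam) + B) sqrt(ln(1/delta)/(2m))."""
    slack = kappa ** 2 / (lam * m)                 # = 2 beta
    M = kappa * np.sqrt(B / lam) + B               # bound on the loss (Lemma 23)
    conf = (2 * kappa ** 2 / lam + M) * np.sqrt(np.log(1 / delta) / (2 * m))
    return r_emp + slack + conf

# All inputs below are placeholder assumptions, not values from the paper.
print(svm_regression_bound(r_emp=0.1, m=10000, kappa=1.0, B=1.0, lam=0.1, delta=0.05))
```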
Example 2 (Stability of soft margin SVM classification) We have $\mathcal{Y} = \{-1, 1\}$. We consider the following loss function:
$$\ell(f, z) = (1 - yf(x))_+ = \begin{cases} 1 - yf(x) & \text{if } 1 - yf(x) \ge 0 \\ 0 & \text{otherwise,} \end{cases}$$
which is 1-admissible. From Lemma 20, we deduce that the real-valued classifier obtained by the SVM optimization procedure has classification stability $\beta$ with
$$\beta \le \frac{\kappa^2}{2\lambda m}.$$
We use Theorem 17 with $\gamma = 1$ and thus get
$$R \le R_{emp}^1 + \frac{\kappa^2}{\lambda m} + \left(\frac{2\kappa^2}{\lambda} + 1\right)\sqrt{\frac{\ln 1/\delta}{2m}},$$
where $R_{emp}^1$ is the clipped error. It can be seen that $R_{emp}^1 \le \frac{1}{m}\sum_{i=1}^m \ell(f, z_i) = \frac{1}{m}\sum_{i=1}^m \xi_i$, where the $\xi_i$ are the slack variables that appear in the formulation of the soft margin SVM.
Note that the same remark as in the previous example holds here: there is no bias $b$ in the definition of the SVM.

Example 3 (Stability of Regularized Least Squares Regression) Again we will consider the bounded case $\mathcal{Y} = [0, B]$. The regularized least squares regression algorithm is defined by
$$A_S = \arg\min_{g \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m \ell(g, z_i) + \lambda \|g\|_k^2,$$
where $\ell(f, z) = (f(x) - y)^2$. We can state $B(y) = B^2$ so that $\ell$ is $2B$-admissible by Lemma 23. Also we have
$$\forall z \in \mathcal{Z}, \quad 0 \le \ell(A_S, z) \le \frac{2\kappa B^2}{\sqrt{\lambda}} + B^2.$$
The stability bound for this algorithm is thus
$$\beta \le \frac{2\kappa^2 B^2}{\lambda m},$$
so that we have the generalization error bound
$$R \le R_{emp} + \frac{4\kappa^2 B^2}{\lambda m} + \left(\frac{8\kappa^2 B^2}{\lambda} + \frac{2\kappa B^2}{\sqrt{\lambda}} + B^2\right)\sqrt{\frac{\ln 1/\delta}{2m}}.$$
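Since regularized least squares has a closed form (by the representer theorem), its stability can also be checked empirically: train on $S$ and on $S^{\setminus i}$ and measure the largest change of the loss on held-out points. A sketch (ours), with a Gaussian kernel so that $\kappa = 1$; the data are our own toy assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
m, lam = 100, 1.0
X = rng.uniform(-1, 1, m); Y = np.sin(3 * X) + rng.normal(0, 0.1, m)
kern = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2)   # kappa^2 = 1

def fit(X, Y):
    # Minimizer of (1/m) sum (g(x_i)-y_i)^2 + lam ||g||_k^2 in the RKHS:
    # g(x) = k(x, X) alpha with alpha = (K + lam m I)^{-1} Y  (representer theorem)
    K = kern(X, X)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), Y)
    return lambda x: kern(x, X) @ alpha

f, f_loo = fit(X, Y), fit(X[1:], Y[1:])        # remove one training point
xt = rng.uniform(-1, 1, 500)
yt = np.sin(3 * xt)
change = np.max(np.abs((f(xt) - yt) ** 2 - (f_loo(xt) - yt) ** 2))
print("max loss change:", change, " O(1/(lam m)) scale:", 1.0 / (lam * m))
```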

5.2.3 Regularization by the Relative Entropy


In this section we consider algorithms that build a mixture or a weighted combination of base hypotheses.
Let's consider a set $\mathcal{H}$ of functions $h_\theta : \mathcal{X} \to \mathcal{Y}$ parameterized by some parameter $\theta \in \Theta$:
$$\mathcal{H} = \{h_\theta : \theta \in \Theta\}.$$
This set is the base class from which the learning algorithm will form mixtures by averaging the predictions of base hypotheses. More precisely, we assume that $\Theta$ is a measurable space where a reference measure is defined. The output of our algorithm is a mixture of elements from $\Theta$; in other words, it is a probability distribution over $\Theta$. We will thus choose $\mathcal{F}$ as the set of all such probability distributions (dominated by the reference measure), defined by their density with respect to the reference measure.
Once an element $f \in \mathcal{F}$ is chosen by the algorithm, the predictions are computed as follows:
$$\hat{y}(x) = \int_\Theta h_\theta(x) f(\theta) \, d\theta,$$
which means that the prediction produced by the algorithm is indeed a weighted combination of the predictions of the base hypotheses, weighted by the density $f$. In Bayesian terms, $A_S$ would be a posterior on $\Theta$ computed from the observation of $S$, and $\hat{y}(x)$ is the corresponding Bayes prediction. By some abuse of notation, we will denote by $A_S$ both the element $f \in \mathcal{F}$ that is used by the algorithm to weigh the base hypotheses (which can be considered as a function $\Theta \to \mathbb{R}$) and the prediction function $x \in \mathcal{X} \mapsto \hat{y}(x)$.
Now we need to define a loss function on $\mathcal{F} \times \mathcal{Z}$. This can be done by extending a loss function $r$ defined on $\mathcal{H} \times \mathcal{Z}$ with associated cost function $s$ ($r(h_\theta, z) = s(h_\theta(x), y)$). There are two ways of deriving a loss function on $\mathcal{F}$. We can simply use $s$ to compute the discrepancy between the predicted and true labels,
$$\ell(g, z) = s(\hat{y}(x), y), \tag{27}$$
or we can average the loss over $\Theta$,
$$\ell(g, z) = \int_\Theta r(h_\theta, z) g(\theta) \, d\theta. \tag{28}$$
The first loss is the one used when one is doing Bayesian averaging of hypotheses. The second loss corresponds to the expected loss of a randomized algorithm that would sample $h \in \mathcal{H}$ according to the posterior $A_S$ to perform the predictions.
In the remainder, we will focus on the second type of loss since it is easier to analyze. Note, however, that this loss will be used only to define a regularization algorithm, and that the loss that is used to measure its error may be different.
Our goal is to choose the posterior $f$ via the minimization of a regularized objective function. We choose some fixed density $f_0$ and define the regularizer as
$$N(g) = K(g, f_0) = \int_\Theta g(\theta) \ln\frac{g(\theta)}{f_0(\theta)} \, d\theta,$$
$K$ being the Kullback-Leibler divergence or relative entropy. In Bayesian terms, $f_0$ would be our prior. Now, the goal is to minimize the following objective function:
$$R_r(g) = \frac{1}{m} \sum_{i=1}^m \ell(g, z_i) + \lambda K(g, f_0),$$
where $\ell$ is given by (28). We can interpret the minimization of this objective function as the computation of the Maximum A Posteriori (MAP) estimate.
Let's analyze this algorithm. We will assume that we know a bound $M$ on the loss $r(h_\theta, z)$. First, notice that $\ell$ is linear in $g$ and is thus convex and $M$-Lipschitz with respect to the $L_1$ norm:
$$|\ell(g, z) - \ell(g', z)| \le M \int_\Theta |g(\theta) - g'(\theta)| \, d\theta.$$
Thus $\ell$ is $M$-admissible with respect to $\mathcal{F}$.


We can now state the following result on the uniform stability of the algorithm defined above.
Theorem 24 Let $\mathcal{F}$ be defined as above and let $r$ be any loss function defined on $\mathcal{H} \times \mathcal{Z}$, bounded by $M$. Let $f_0$ be a fixed member of $\mathcal{F}$. When $\ell$ is defined by (28), the learning algorithm $A$ defined by
$$A_S = \arg\min_{g \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m \ell(g, z_i) + \lambda K(g, f_0) \tag{29}$$
has uniform stability $\beta$ with respect to $\ell$ with
$$\beta \le \frac{M^2}{\lambda m}.$$

Proof Recall the following property of the relative entropy (see e.g. Cover and Thomas, 1991): for any $g, g'$,
$$\frac{1}{2}\left(\int_\Theta |g(\theta) - g'(\theta)| \, d\theta\right)^2 \le K(g, g').$$
Moreover, the Bregman divergence associated to the relative entropy to $f_0$ is
$$d_{K(\cdot, f_0)}(g, g') = K(g, g').$$
We saw that $\ell$ is $M$-admissible, thus by Lemma 21 we get
$$\lambda\left(\int_\Theta |f(\theta) - f^{\setminus i}(\theta)| \, d\theta\right)^2 \le \frac{M}{m} \int_\Theta |f(\theta) - f^{\setminus i}(\theta)| \, d\theta,$$
hence
$$\int_\Theta |f(\theta) - f^{\setminus i}(\theta)| \, d\theta \le \frac{M}{\lambda m},$$
and thus, using again the $M$-admissibility of $\ell$, we get for all $z \in \mathcal{Z}$,
$$|\ell(f, z) - \ell(f^{\setminus i}, z)| \le \frac{M^2}{\lambda m},$$
which concludes the proof.
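When $\Theta$ is finite, the minimizer of (29) has a closed form: since $\ell$ is linear in $g$, entropy regularization yields the Gibbs distribution $g(\theta) \propto f_0(\theta)\exp(-\bar{R}(\theta)/\lambda)$, where $\bar{R}(\theta)$ is the average loss of $h_\theta$ on $S$. This closed form is a standard fact about relative entropy regularization, not spelled out in the text; the sketch below (ours) uses it to check the $L_1$ stability bound $\frac{M}{\lambda m}$ from the proof numerically:

```python
import numpy as np

rng = np.random.default_rng(4)
n_theta, m, lam, M = 200, 100, 0.5, 1.0

losses = rng.uniform(0, M, size=(n_theta, m))    # r(h_theta, z_i) in [0, M] (toy data)
prior = np.full(n_theta, 1.0 / n_theta)          # f_0 uniform over a finite Theta

def gibbs_posterior(loss_matrix):
    # argmin over densities g of (1/m) sum_i <r(., z_i), g> + lam * KL(g || prior)
    # = Gibbs distribution: g(theta) propto prior(theta) exp(-avg_loss(theta)/lam)
    avg = loss_matrix.mean(axis=1)
    w = prior * np.exp(-(avg - avg.min()) / lam)  # shift for numerical stability
    return w / w.sum()

g_full = gibbs_posterior(losses)
g_loo = gibbs_posterior(losses[:, 1:])            # drop training point z_1
l1 = np.abs(g_full - g_loo).sum()
print("L1 distance between posteriors:", l1, " bound M/(lam m) =", M / (lam * m))
```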


Now, let's consider the case of classification, where $\mathcal{Y} = \{-1, 1\}$. If we use base hypotheses $h_\theta$ that return values in $\{-1, 1\}$, it is easy to see from the proof of the above theorem that algorithm $A$ has classification stability $\frac{M}{\lambda m}$. Indeed, we have
$$|A_S(x) - A_{S^{\setminus i}}(x)| = \left|\int_\Theta h_\theta(x)\left(A_S(\theta) - A_{S^{\setminus i}}(\theta)\right) d\theta\right| \le \int_\Theta |A_S(\theta) - A_{S^{\setminus i}}(\theta)| \, d\theta \le \frac{M}{\lambda m},$$
where the last inequality is derived in the proof of Theorem 24.

Example 4 (Maximum Entropy Discrimination) Jaakkola et al. (1999) introduced the Minimum Relative Entropy (MRE) algorithm, which is a real-valued classifier obtained by minimizing
$$R_r(g) = \frac{1}{m} \sum_{i=1}^m \ell(g, z_i) + \lambda K(g, f_0),$$
where the base class has two parameters, $\mathcal{H} = \{h_{\theta, b} : \theta \in \Theta, b \in \mathbb{R}\}$ (with $h_{\theta, b} = h_\theta + b$), and the loss is defined by
$$\ell(g, z) = \left(\int_{\Theta \times \mathbb{R}} (\rho - y h_{\theta, b}(x)) \, g(\theta, b) \, d\theta \, db\right)_+.$$
If we have a bound $B$ on the quantity $\rho - y h_{\theta, b}(x)$, we see that this loss function is $B$-admissible, and thus by Theorem 24 (and the remark about classification stability) we deduce that the MRE algorithm has classification stability bounded by
$$\frac{B}{\lambda m}.$$

6. Discussion
For regularization algorithms, we obtained bounds on the uniform stability of the order of $\beta = O(\frac{1}{\lambda m})$. Plugging this result into our main theorem, we obtained bounds on the generalization error of the following type:
$$R \le R_{emp} + O\left(\frac{1}{\lambda\sqrt{m}}\right),$$
so that we obtain non-trivial results only if we can guarantee that $\lambda \gg \frac{1}{\sqrt{m}}$. This is likely to depend on the noise in the data, and no theoretical results exist that guarantee that $\lambda$ does not decrease too fast when $m$ is increased.
However, it should be possible to refine our results, which sometimes used quite crude bounds. It seems reasonable that a bound like
$$R \le R_{emp} + O\left(\frac{1}{\sqrt{\lambda m}}\right)$$
could be possible to obtain. This remains an open problem.
In order to better understand the distinctive features of our bounds, we can compare them to bounds from Structural Risk Minimization (SRM), for example on the SVM algorithm. The SVM algorithm can be presented using the two equivalent formulations
$$\min_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m (1 - y_i f(x_i))_+ + \lambda \|f\|_k^2,$$
or
$$\min_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m (1 - y_i f(x_i))_+ \quad \text{with} \quad \|f\|_k^2 \le \Lambda.$$
The equivalence of these two problems comes from the fact that for any $\lambda$, there exists a $\Lambda$ such that the solutions of the two problems are the same.
The SRM principle consists in solving the second problem for several values of $\Lambda$ and then choosing the value that minimizes a bound that depends on the VC-dimension of the set $\{f : \|f\|_k^2 \le \Lambda\}$. However, this quantity is usually not easy to compute and only loose upper bounds can be found. Moreover, since minimization under a constraint on the norm is not easy to perform, one typically performs the first minimization for a particular value of $\lambda$ (chosen by cross-validation) and then uses the SRM bounds with $\Lambda = \|f\|_k^2$. This requires the SRM bounds to hold uniformly for all values of $\Lambda$.
This approach has led to bounds which were quite predictive of the behavior but quantitatively very loose.
In contrast, our approach directly focuses on the actual minimization that is performed (the first one) and does not require the computation of a complexity measure. Indeed, the complexity is implicitly evaluated by the actual parameter $\lambda$.

7. Conclusion
We explored the possibility of obtaining generalization bounds for specific algorithms from stability properties. We introduced several notions of stability and obtained corresponding generalization bounds with either the empirical error or the leave-one-out error. Our main result is an exponential bound for algorithms that have good uniform stability. We then proved that regularization algorithms have such a property and that their stability is controlled by the regularization parameter $\lambda$. This allowed us to obtain bounds on the generalization error of Support Vector Machines both in the classification and in the regression framework that do not depend on the implicit VC-dimension but rather depend explicitly on the tradeoff parameter $C$.
Further directions of research include the question of obtaining better bounds via uniform stability and the use of less restrictive notions of stability. Of great practical interest would be to design algorithms that maximize their own stability.

Acknowledgements
The authors wish to thank Ryan Rifkin and Ralf Herbrich for fruitful comments that helped improve the readability, and Alex Smola, Gábor Lugosi, Stéphane Boucheron and Sayan Mukherjee for stimulating discussions.

Appendix A. Proof of Lemma 9


Let us start with a generalized version of a lemma from Rogers and Wagner (1978).

Lemma 25 For any learning algorithm $A$, any $i, j \in \{1, \ldots, m\}$ such that $i \ne j$, we have

$$ \mathbb{E}_S[(R - R_{emp})^2] \le \mathbb{E}_{S,z,z'}[\ell(A_S, z)\ell(A_S, z')] - 2\, \mathbb{E}_{S,z}[\ell(A_S, z)\ell(A_S, z_i)] + \mathbb{E}_S[\ell(A_S, z_i)\ell(A_S, z_j)] + \frac{M}{m}\, \mathbb{E}_S[\ell(A_S, z_i)] - \frac{1}{m}\, \mathbb{E}_S[\ell(A_S, z_i)\ell(A_S, z_j)], $$

and

$$ \mathbb{E}_S[(R - R_{loo})^2] \le \mathbb{E}_{S,z,z'}[\ell(A_S, z)\ell(A_S, z')] - 2\, \mathbb{E}_{S,z}[\ell(A_S, z)\ell(A_{S^{\setminus i}}, z_i)] + \mathbb{E}_S[\ell(A_{S^{\setminus i}}, z_i)\ell(A_{S^{\setminus j}}, z_j)] + \frac{M}{m}\, \mathbb{E}_S[R^{\setminus i}] - \frac{1}{m}\, \mathbb{E}_S[\ell(A_{S^{\setminus i}}, z_i)\ell(A_{S^{\setminus j}}, z_j)]. $$
Proof We have

$$ \mathbb{E}_S[R^2] = \mathbb{E}_S\Big[ \big( \mathbb{E}_z[\ell(A_S, z)] \big)^2 \Big] = \mathbb{E}_S\big[ \mathbb{E}_z[\ell(A_S, z)]\, \mathbb{E}_{z'}[\ell(A_S, z')] \big] = \mathbb{E}_S\big[ \mathbb{E}_{z,z'}[\ell(A_S, z)\ell(A_S, z')] \big], $$

and also

$$ \mathbb{E}_S[R\, R_{emp}] = \mathbb{E}_S\Big[ R\, \frac{1}{m} \sum_{i=1}^m \ell(A_S, z_i) \Big] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_S[R\, \ell(A_S, z_i)] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_{S,z}[\ell(A_S, z)\ell(A_S, z_i)] = \mathbb{E}_{S,z}[\ell(A_S, z)\ell(A_S, z_i)], $$

and also

$$ \mathbb{E}_S[R\, R_{loo}] = \mathbb{E}_S\Big[ R\, \frac{1}{m} \sum_{i=1}^m \ell(A_{S^{\setminus i}}, z_i) \Big] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_S[R\, \ell(A_{S^{\setminus i}}, z_i)] = \mathbb{E}_{S,z}[\ell(A_S, z)\ell(A_{S^{\setminus i}}, z_i)], $$

for any fixed $i$, by symmetry. Also we have

$$ \mathbb{E}_S[R_{emp}^2] = \frac{1}{m^2} \sum_{i=1}^m \mathbb{E}_S\big[ \ell(A_S, z_i)^2 \big] + \frac{1}{m^2} \sum_{i \ne j} \mathbb{E}_S[\ell(A_S, z_i)\ell(A_S, z_j)] \le \frac{M}{m}\, \mathbb{E}_S\Big[ \frac{1}{m} \sum_{i=1}^m \ell(A_S, z_i) \Big] + \frac{m-1}{m}\, \mathbb{E}_S[\ell(A_S, z_i)\ell(A_S, z_j)] = \frac{M}{m}\, \mathbb{E}_S[\ell(A_S, z_i)] + \frac{m-1}{m}\, \mathbb{E}_S[\ell(A_S, z_i)\ell(A_S, z_j)], $$

and

$$ \mathbb{E}_S[R_{loo}^2] = \frac{1}{m^2} \sum_{i=1}^m \mathbb{E}_S\big[ \ell(A_{S^{\setminus i}}, z_i)^2 \big] + \frac{1}{m^2} \sum_{i \ne j} \mathbb{E}_S[\ell(A_{S^{\setminus i}}, z_i)\ell(A_{S^{\setminus j}}, z_j)] \le \frac{M}{m}\, \mathbb{E}_S\Big[ \frac{1}{m} \sum_{i=1}^m \ell(A_{S^{\setminus i}}, z_i) \Big] + \frac{m-1}{m}\, \mathbb{E}_S[\ell(A_{S^{\setminus i}}, z_i)\ell(A_{S^{\setminus j}}, z_j)] = \frac{M}{m}\, \mathbb{E}_S[R^{\setminus i}] + \frac{m-1}{m}\, \mathbb{E}_S[\ell(A_{S^{\setminus i}}, z_i)\ell(A_{S^{\setminus j}}, z_j)], $$

which concludes the proof.

Now let us prove Lemma 9. We will use several times the fact that the random variables are i.i.d., so that we can interchange them without modifying the expectation (it is just a matter of renaming them). We introduce the notation $T = S^{\setminus i,j}$ and we will denote by $A_{T,z,z'}$ the result of training on the set $T \cup \{z, z'\}$.
Let us first reformulate the first inequality of Lemma 25 as

$$ \mathbb{E}_S[(R - R_{emp})^2] \le \frac{1}{m}\, \mathbb{E}_S\big[ \ell(A_S, z_i)\, (M - \ell(A_S, z_j)) \big] + \mathbb{E}_{S,z,z'}\big[ \ell(A_S, z)\ell(A_S, z') - \ell(A_S, z)\ell(A_S, z_i) \big] + \mathbb{E}_{S,z,z'}\big[ \ell(A_S, z_i)\ell(A_S, z_j) - \ell(A_S, z)\ell(A_S, z_i) \big] =: I_1 + I_2 + I_3. $$
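To see that this reformulation agrees with the right-hand side of Lemma 25, it suffices to group the last two terms of that bound:

$$ \frac{M}{m}\, \mathbb{E}_S[\ell(A_S, z_i)] - \frac{1}{m}\, \mathbb{E}_S[\ell(A_S, z_i)\ell(A_S, z_j)] = \frac{1}{m}\, \mathbb{E}_S\big[ \ell(A_S, z_i)\, (M - \ell(A_S, z_j)) \big], $$

since $M$ is a constant.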
Using Schwarz's inequality we have

$$ \mathbb{E}_S\big[ \ell(A_S, z_i)\, (M - \ell(A_S, z_j)) \big]^2 \le \mathbb{E}_S\big[ \ell(A_S, z_i)^2 \big]\, \mathbb{E}_S\big[ (M - \ell(A_S, z_j))^2 \big] \le M^2\, \mathbb{E}_S[\ell(A_S, z_i)]\, \mathbb{E}_S[M - \ell(A_S, z_j)] = M^2\, \mathbb{E}_S[\ell(A_S, z_i)]\, \big( M - \mathbb{E}_S[\ell(A_S, z_i)] \big) \le \frac{M^4}{4}, $$

where the second step uses $0 \le \ell \le M$, the equality uses $\mathbb{E}_S[\ell(A_S, z_j)] = \mathbb{E}_S[\ell(A_S, z_i)]$ (by symmetry), and the last step uses $x(M - x) \le M^2/4$. We thus conclude

$$ I_1 \le \frac{M^2}{2m}. $$

Now we rewrite $I_2$ as

$$ I_2 = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z_i) \big] $$
$$ = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z_j,z'}, z)\ell(A_{T,z_j,z'}, z') \big] \quad \text{(renaming } z_i \text{ as } z' \text{ in the second term)} $$
$$ = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z) - \ell(A_{T,z,z_j}, z))\, \ell(A_{T,z_i,z_j}, z') \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z,z_j}, z) - \ell(A_{T,z_j,z'}, z))\, \ell(A_{T,z_i,z_j}, z') \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z_j,z'}, z'))\, \ell(A_{T,z_j,z'}, z) \big]. $$
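Each of these three-term splits is an instance of the same telescoping identity: for any quantities $a, b, c, d$ and any intermediate quantity $e$,

$$ ab - cd = (a - e)\,b + (e - c)\,b + (b - d)\,c, $$

as expanding the right-hand side shows (the terms $\pm eb$ cancel, and $-cb + bc$ cancel, leaving $ab - cd$). Here $e$ is chosen so that each difference compares two training sets differing in a single example.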


Next we rewrite $I_3$ as

$$ I_3 = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z_j}, z_i)\ell(A_{T,z_i,z_j}, z_j) - \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z_i) \big] $$
$$ = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z,z'}, z)\ell(A_{T,z,z'}, z') - \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z_i) \big] \quad \text{(renaming } z_j \text{ as } z' \text{ and } z_i \text{ as } z \text{ in the first term)} $$
$$ = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z,z'}, z)\ell(A_{T,z,z'}, z') - \ell(A_{T,z',z_i}, z)\ell(A_{T,z',z_i}, z') \big] \quad \text{(exchanging } z_i \text{ and } z_j \text{, then renaming } z_j \text{ as } z' \text{ in the second term)} $$
$$ = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z,z'}, z') - \ell(A_{T,z,z_i}, z'))\, \ell(A_{T,z,z'}, z) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z,z'}, z) - \ell(A_{T,z_i,z'}, z))\, \ell(A_{T,z,z_i}, z') \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z,z_i}, z') - \ell(A_{T,z',z_i}, z'))\, \ell(A_{T,z_i,z'}, z) \big] $$
$$ = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_j,z'}, z') - \ell(A_{T,z_j,z_i}, z'))\, \ell(A_{T,z_j,z'}, z_j) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z,z_j}, z) - \ell(A_{T,z_i,z_j}, z))\, \ell(A_{T,z,z_i}, z_j) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z',z_j}, z) - \ell(A_{T,z,z_j}, z))\, \ell(A_{T,z_j,z}, z') \big], $$

where in the last line we replaced $z$ by $z_j$ in the first term, replaced $z'$ by $z_j$ in the second term, and exchanged $z$ and $z'$ as well as $z_i$ and $z_j$ in the last term.
Summing $I_2$ and $I_3$ we obtain

$$ I_2 + I_3 = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z) - \ell(A_{T,z,z_j}, z))\, (\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z,z_i}, z_j)) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z,z_j}, z) - \ell(A_{T,z_j,z'}, z))\, (\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z_j,z}, z')) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z_j,z'}, z'))\, (\ell(A_{T,z_j,z'}, z) - \ell(A_{T,z_j,z'}, z_j)) \big] $$
$$ \le 3M\, \mathbb{E}_{S,z}\big[ |\ell(A_{T,z_i,z_j}, z) - \ell(A_{T,z,z_j}, z)| \big] = 3M\, \mathbb{E}_{S,z_i'}\big[ |\ell(A_S, z_i) - \ell(A_{S^i}, z_i)| \big], $$

which proves the first part of the bound.


For the second part, we use the same technique and slightly vary the algebra. We rewrite $I_2$ as

$$ I_2 = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z_i) \big] $$
$$ = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z,z_j}, z')\ell(A_{T,z,z_j}, z) \big] \quad \text{(renaming } z_i \text{ as } z \text{ and } z \text{ as } z' \text{ in the second term)} $$
$$ = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z,z_j}, z'))\, \ell(A_{T,z_i,z_j}, z) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z) - \ell(A_{T,z,z_j}, z))\, \ell(A_{T,z,z_j}, z') \big]. $$

Next we rewrite $I_3$ as

$$ I_3 = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z_j}, z_i)\ell(A_{T,z_i,z_j}, z_j) - \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z_i) \big] $$
$$ = \mathbb{E}_{S,z,z'}\big[ \ell(A_{T,z_i,z}, z_i)\ell(A_{T,z_i,z}, z) - \ell(A_{T,z_i,z_j}, z)\ell(A_{T,z_i,z_j}, z_i) \big] \quad \text{(renaming } z_j \text{ as } z \text{ in the first term)} $$
$$ = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z}, z_i) - \ell(A_{T,z_i,z_j}, z_i))\, \ell(A_{T,z_i,z}, z) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z}, z) - \ell(A_{T,z_i,z_j}, z))\, \ell(A_{T,z_i,z_j}, z_i) \big] $$
$$ = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z}, z_i) - \ell(A_{T,z_i,z_j}, z_i))\, \ell(A_{T,z_i,z}, z) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_j,z}, z) - \ell(A_{T,z_i,z_j}, z))\, \ell(A_{T,z_i,z_j}, z_j) \big] \quad \text{(exchanging } z_i \text{ and } z_j \text{ in the second term).} $$

Summing $I_2$ and $I_3$ we obtain

$$ I_2 + I_3 = \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z_j}, z') - \ell(A_{T,z,z_j}, z'))\, \ell(A_{T,z_i,z_j}, z) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_i,z}, z_i) - \ell(A_{T,z_i,z_j}, z_i))\, \ell(A_{T,z_i,z}, z) \big] + \mathbb{E}_{S,z,z'}\big[ (\ell(A_{T,z_j,z}, z) - \ell(A_{T,z_i,z_j}, z))\, (\ell(A_{T,z,z_j}, z') - \ell(A_{T,z_i,z_j}, z_j)) \big] $$
$$ \le M\, \mathbb{E}_{S,z_i',z}\big[ |\ell(A_S, z) - \ell(A_{S^i}, z)| \big] + M\, \mathbb{E}_{S,z_i'}\big[ |\ell(A_S, z_j) - \ell(A_{S^i}, z_j)| \big] + M\, \mathbb{E}_{S,z_i'}\big[ |\ell(A_S, z_i) - \ell(A_{S^i}, z_i)| \big]. $$
This concludes the proof of the bound for the empirical error.
We now turn to the leave-one-out error, for which the bound can be obtained in a similar way. Indeed, if we rewrite the derivation for the empirical error, we simply have to remove from the training set the point at which the loss is computed; that is, we simply have to replace all quantities of the form $\ell(A_{T,z,z'}, z)$ by $\ell(A_{T,z'}, z)$. It is easy to see that the above results are modified in a way that gives the correct bound for the leave-one-out error.

Appendix B. Proof of Theorem 18


First we rewrite Inequality (15) in Theorem 17 as

$$ \mathbb{P}_S\left[ R^{\lambda} - R_{emp}^{\lambda} > 2\beta(\lambda) + (4m\beta(\lambda) + 1)\sqrt{\frac{\varepsilon}{2m}}\, \right] \le e^{-\varepsilon}, $$

where $R^{\lambda}$ and $R_{emp}^{\lambda}$ denote the risk and the empirical error of the algorithm run with regularization parameter $\lambda$, and $\beta(\lambda)$ its uniform stability. We introduce the following quantity

$$ u(\varepsilon, \lambda) = 2\beta(\lambda) + (4m\beta(\lambda) + 1)\sqrt{\frac{\varepsilon}{2m}}, $$

and rewrite the above bound as

$$ \mathbb{P}_S\left[ R^{\lambda} - R_{emp}^{\lambda} > u(\varepsilon, \lambda) \right] \le e^{-\varepsilon}. $$

We define a sequence $(\lambda_k)_{k \ge 0}$ of real numbers such that

$$ \lambda_k = B e^{-k}, $$

and we define $\varepsilon_k = t + 2\ln k$.
Now, we use the union bound to get a statement that holds for all values in the sequence $(\lambda_k)_{k \ge 1}$:

$$ \mathbb{P}_S\left[ \exists k \ge 1,\ R^{\lambda_k} - R_{emp}^{\lambda_k} > u(\varepsilon_k, \lambda_k) \right] \le \sum_{k \ge 1} \mathbb{P}_S\left[ R^{\lambda_k} - R_{emp}^{\lambda_k} > u(\varepsilon_k, \lambda_k) \right] \le \sum_{k \ge 1} e^{-\varepsilon_k} = \Big( \sum_{k \ge 1} \frac{1}{k^2} \Big) e^{-t} \le 2 e^{-t}, $$

since $\sum_{k \ge 1} k^{-2} = \pi^2/6 \le 2$.
For a given $\lambda \in (0, B]$, consider the unique value $k \ge 1$ such that $\lambda_k \le \lambda \le \lambda_{k-1}$. We thus have $\lambda \le e \lambda_k$.
The following inequalities follow from the definition of $\lambda_k$:

$$ \frac{1}{\lambda_k} \le \frac{e}{\lambda}, \qquad R_{emp}^{\lambda_k} \le R_{emp}^{\lambda}, \qquad \sqrt{2\ln k} = \sqrt{2\ln\ln\frac{B}{\lambda_k}} \le \sqrt{2\ln\ln\frac{eB}{\lambda}}. $$

In particular, since $\beta(\lambda)$ scales as $1/(\lambda m)$, the first inequality gives $\beta(\lambda_k) \le e\,\beta(\lambda)$, so that we have

$$ u(t + 2\ln k, \lambda_k) \le 2e\,\beta(\lambda) + \Big( \sqrt{t} + \sqrt{2\ln\ln\frac{eB}{\lambda}} \Big) \big( 4me\,\beta(\lambda) + 1 \big) \sqrt{\frac{1}{2m}} =: v(\lambda, t). $$

We thus get the following implication:

$$ R^{\lambda} - R_{emp}^{\lambda} > v(\lambda, t) \ \Longrightarrow\ R^{\lambda_k} - R_{emp}^{\lambda_k} > u(t + 2\ln k, \lambda_k). $$

This reasoning thus proves that

$$ \mathbb{P}_S\left[ \exists \lambda \in (0, B],\ R^{\lambda} - R_{emp}^{\lambda} > v(\lambda, t) \right] \le \mathbb{P}_S\left[ \exists k \ge 1,\ R^{\lambda_k} - R_{emp}^{\lambda_k} > u(t + 2\ln k, \lambda_k) \right], $$

and thus

$$ \mathbb{P}_S\left[ \exists \lambda \in (0, B],\ R^{\lambda} - R_{emp}^{\lambda} > v(\lambda, t) \right] \le 2 e^{-t}, $$

which can be written as

$$ \mathbb{P}_S\left[ \exists \lambda \in (0, B],\ R^{\lambda} - R_{emp}^{\lambda} > 2e\,\beta(\lambda) + \Big( \sqrt{t} + \sqrt{2\ln\ln\frac{eB}{\lambda}} \Big) \big( 4me\,\beta(\lambda) + 1 \big) \sqrt{\frac{1}{2m}} \right] \le 2 e^{-t}, $$

and gives, with probability $1 - \delta$ (taking $t = \ln(2/\delta)$),

$$ \forall \lambda \in (0, B], \quad R^{\lambda} \le R_{emp}^{\lambda} + 2e\,\beta(\lambda) + \Big( \sqrt{\ln\frac{2}{\delta}} + \sqrt{2\ln\ln\frac{eB}{\lambda}} \Big) \big( 4me\,\beta(\lambda) + 1 \big) \sqrt{\frac{1}{2m}}, $$

which gives the first inequality. The second inequality can be proven in the same way.
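To get a feel for the price of this uniformity in $\lambda$, here is a worked instance with illustrative values: for $B = 1$ and $\lambda = 10^{-2}$,

$$ \sqrt{2\ln\ln\frac{eB}{\lambda}} = \sqrt{2\ln\ln(100e)} \approx \sqrt{2\ln 5.61} \approx 1.86, $$

which is comparable to $\sqrt{\ln(2/\delta)} \approx 2.30$ at $\delta = 0.01$: the $\ln\ln$ term adds only a modest additive contribution.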

Appendix C. Convexity
For more details see Gordon (1999) or Rockafellar (1970). A convex function $F$ is any function from a vector space $\mathcal{F}$ to $\mathbb{R} \cup \{-\infty, +\infty\}$ which satisfies

$$ \lambda F(g) + (1 - \lambda) F(g') \ge F(\lambda g + (1 - \lambda) g'), $$

for all $g, g' \in \mathcal{F}$ and $\lambda \in [0, 1]$. A proper convex function is one that is always greater than $-\infty$ and not uniformly $+\infty$. The domain of $F$ is the set of points where $F$ is finite. A convex function is closed if its epigraph $\{(f, y) : y \ge F(f)\}$ is closed. The subgradient of a convex function at a point $g$, written $\partial F(g)$, is the set of vectors $a$ such that

$$ F(g') \ge F(g) + \langle g' - g, a \rangle, $$

for all $g'$.
Convex functions are continuous on the interior of their domain, and differentiable there except on a set of measure zero. For a convex function $F$ we define the dual of $F$, denoted $F^*$, by

$$ F^*(a) = \sup_g\ \langle a, g \rangle - F(g). $$

Denoting by $\nabla F(g')$ a subgradient of $F$ at $g'$ (i.e., a member of $\partial F(g')$), we can define the Bregman divergence associated to $F$ of $g$ to $g'$ by

$$ d_F(g, g') = F(g) - F(g') - \langle g - g', \nabla F(g') \rangle. $$

When $F$ is everywhere differentiable, this is well defined (since the subgradient is unique) and nonnegative (by the definition of the subgradient). Otherwise, we can define the generalized divergence as

$$ d_F(g, a) = F(g) + F^*(a) - \langle g, a \rangle, $$

where $a$ is an element of the dual space of $\mathcal{F}$. Notice that this divergence is also nonnegative. Moreover, the fact that $f$ is a minimum of $F$ in $\mathcal{F}$ is equivalent to

$$ \vec{0} \in \partial F(f), $$

which, with the following relationship

$$ a \in \partial F(g) \iff F(g) + F^*(a) = \langle g, a \rangle, $$

gives

$$ F(f) + F^*(\vec{0}) = 0 $$

when $f$ is a minimum of $F$ in $\mathcal{F}$.
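A standard worked example, for illustration: take $F(g) = \frac{1}{2}\|g\|^2$ on a Hilbert space. Then $\nabla F(g') = g'$, so

$$ d_F(g, g') = \tfrac{1}{2}\|g\|^2 - \tfrac{1}{2}\|g'\|^2 - \langle g - g', g' \rangle = \tfrac{1}{2}\|g - g'\|^2, $$

and the dual is $F^*(a) = \sup_g \langle a, g \rangle - \tfrac{1}{2}\|g\|^2 = \tfrac{1}{2}\|a\|^2$ (the supremum is attained at $g = a$), so the generalized divergence is $d_F(g, a) = \tfrac{1}{2}\|g\|^2 + \tfrac{1}{2}\|a\|^2 - \langle g, a \rangle = \tfrac{1}{2}\|g - a\|^2$.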
When $F$ is everywhere differentiable, it is easy to get

$$ \forall g \in \mathcal{F}, \quad d_F(g, f) = F(g) - F(f), \qquad (30) $$

otherwise, using generalized divergences, we have

$$ \forall g \in \mathcal{F}, \quad d_F(g, \vec{0}) = F(g) - F(f). \qquad (31) $$
References
N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44(4):615-631, 1997.

P. Bartlett. For valid generalization, the size of the weights is more important than the size of the network. In Advances in Neural Information Processing Systems, 1996.

J.F. Bonnans and A. Shapiro. Optimization problems with perturbation, a guided tour. Technical Report 2872, INRIA, April 1996.

O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In Neural Information Processing Systems 14, 2001.

L. Breiman. Bagging predictors. Machine Learning, 24:123-140, 1996a.

L. Breiman. Heuristics of instability and stabilization in model selection. Annals of Statistics, 24(6):2350-2383, 1996b.

T.M. Cover and J.A. Thomas. Elements of Information Theory. John Wiley, 1991.

L. Devroye. Exponential inequalities in nonparametric estimation. In Nonparametric Functional Estimation and Related Topics, pages 31-44. Kluwer Academic Publishers, 1991.

L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer Verlag, 1996.

L. Devroye and T. Wagner. Distribution-free inequalities for the deleted and holdout error estimates. IEEE Transactions on Information Theory, 25(2):202-207, 1979a.

L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory, 25(5):601-604, 1979b.

T. Evgeniou, M. Pontil, and T. Poggio. A unified framework for regularization networks and support vector machines. A.I. Memo 1654, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, December 1999.

G. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999.

T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In Neural Information Processing Systems 12, 1999.

M. Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427-1453, 1999.

G. Lugosi and M. Pawlak. On the posterior-probability estimate of the error of nonparametric classification rules. IEEE Transactions on Information Theory, 40(2):475-481, 1994.

C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148-188. Cambridge University Press, Cambridge, 1989.

T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247(2):978-982, 1990.

R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.

W. Rogers and T. Wagner. A finite sample distribution-free performance bound for local discrimination rules. Annals of Statistics, 6(3):506-514, 1978.

J.M. Steele. An Efron-Stein inequality for nonsymmetric statistics. Annals of Statistics, 14:753-758, 1986.

M. Talagrand. A new look at independence. Annals of Probability, 24:1-34, 1996.

V.N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.

G. Wahba. An introduction to model building with reproducing kernel Hilbert spaces. Technical Report TR 1020, Statistics Department, University of Wisconsin, Madison, 2000.