Learning Kernel Classifiers. Theory and Algorithms
Bioinformatics: The Machine Learning Approach, Pierre Baldi and Søren Brunak
Causation, Prediction, and Search, second edition, Peter Spirtes, Clark Glymour,
and Richard Scheines
Principles of Data Mining, David Hand, Heikki Mannila, and Padhraic Smyth
Bioinformatics: The Machine Learning Approach, second edition, Pierre Baldi and
Søren Brunak
Ralf Herbrich
All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means
(including photocopying, recording, or information storage and retrieval) without permission in writing from the
publisher.
This book was set in Times Roman by the author using the LATEX document preparation system and was printed
and bound in the United States of America.
Herbrich, Ralf.
Learning kernel classifiers : theory and algorithms / Ralf Herbrich.
p. cm. — (Adaptive computation and machine learning)
Includes bibliographical references and index.
ISBN 0-262-08306-X (hc. : alk. paper)
1. Machine learning. 2. Algorithms. I. Title. II. Series.
Series Foreword xv
Preface xvii
1 Introduction 1
1.1 The Learning Problem and (Statistical) Inference 1
1.1.1 Supervised Learning 3
1.1.2 Unsupervised Learning 6
1.1.3 Reinforcement Learning 7
1.2 Learning Kernel Classifiers 8
1.3 The Purposes of Learning Theory 11
I LEARNING ALGORITHMS
II LEARNING THEORY
III APPENDICES
One of the most exciting recent developments in machine learning is the discovery
and elaboration of kernel methods for classification and regression. These algo-
rithms combine three important ideas into a very successful whole. From mathe-
matical programming, they exploit quadratic programming algorithms for convex
optimization; from mathematical analysis, they borrow the idea of kernel repre-
sentations; and from machine learning theory, they adopt the objective of finding
the maximum-margin classifier. After the initial development of support vector
machines, there has been an explosion of kernel-based methods. Ralf Herbrich’s
Learning Kernel Classifiers is an authoritative treatment of support vector ma-
chines and related kernel classification and regression methods. The book examines
these methods both from an algorithmic perspective and from the point of view of
learning theory. The book’s extensive appendices provide pseudo-code for all of the
algorithms and proofs for all of the theoretical results. The outcome is a volume
that will be a valuable classroom textbook as well as a reference for researchers in
this exciting area.
The goal of building systems that can adapt to their environment and learn from
their experience has attracted researchers from many fields, including computer
science, engineering, mathematics, physics, neuroscience, and cognitive science.
Out of this research has come a wide variety of learning techniques that have the
potential to transform many scientific and industrial fields. Recently, several re-
search communities have begun to converge on a common set of issues surround-
ing supervised, unsupervised, and reinforcement learning problems. The MIT Press
series on Adaptive Computation and Machine Learning seeks to unify the many di-
verse strands of machine learning research and to foster high quality research and
innovative applications.
Thomas Dietterich
Preface
Machine learning has witnessed a resurgence of interest over the last few years,
which is a consequence of the rapid development of the information industry.
Data is no longer a scarce resource—it is abundant. Methods for “intelligent”
data analysis to extract relevant information are needed. The goal of this book
is to give a self-contained overview of machine learning, particularly of kernel
classifiers—both from an algorithmic and a theoretical perspective. Although there
exist many excellent textbooks on learning algorithms (see Duda and Hart (1973),
Bishop (1995), Vapnik (1995), Mitchell (1997) and Cristianini and Shawe-Taylor
(2000)) and on learning theory (see Vapnik (1982), Kearns and Vazirani (1994),
Wolpert (1995), Vidyasagar (1997) and Anthony and Bartlett (1999)), there is no
single book which presents both aspects together in reasonable depth. Instead,
these monographs often cover much larger areas of function classes, e.g., neural
networks, decision trees or rule sets, or learning tasks (for example regression
estimation or unsupervised learning). My motivation in writing this book is to
summarize the enormous amount of work that has been done in the specific field
of kernel classification over the last few years. It is my aim to show how all the work
is related to each other. To some extent, I also try to demystify some of the recent
developments, particularly in learning theory, and to make them accessible to a
larger audience. In the course of reading it will become apparent that many already
known results are proven again, and in detail, instead of simply referring to them.
The motivation for doing this is to have all these different results together in one
place—in particular to see their similarities and (conceptual) differences.
The book is structured into a general introduction (Chapter 1) and two parts,
which can be read independently. The material is illustrated through many examples and remarks. The book finishes with a comprehensive appendix containing
mathematical background and proofs of the main theorems. It is my hope that the
level of detail chosen makes this book a useful reference for many researchers
working in this field. Since the book uses a very rigorous notation system, it is
perhaps advisable to have a quick look at the background material and list of sym-
bols on page 331.
The first part of the book is devoted to the study of algorithms for learning
kernel classifiers. This part starts with a chapter introducing the basic concepts of
learning from a machine learning point of view. The chapter will elucidate the ba-
sic concepts involved in learning kernel classifiers—in particular the kernel tech-
nique. It introduces the support vector machine learning algorithm as one of the
most prominent examples of a learning algorithm for kernel classifiers. The second
chapter presents the Bayesian view of learning. In particular, it covers Gaussian
processes, the relevance vector machine algorithm and the classical Fisher discrim-
inant. The first part is complemented by Appendix D, which gives all the pseudo
code for the presented algorithms. In order to enhance the understandability of the
algorithms presented, all algorithms are implemented in R—a statistical language
similar to S-PLUS. The source code is publicly available at http://www.kernel-machines.org/. At this web site the interested reader will also find additional
software packages and many related publications.
The second part of the book is devoted to the theoretical study of learning algo-
rithms, with a focus on kernel classifiers. This part can be read rather independently
of the first part, although I refer back to specific algorithms at some stages. The first
chapter of this part introduces many seemingly different models of learning. It was
my objective to give easy-to-follow “proving arguments” for their main results,
sometimes presented in a “vanilla” version. In order to unburden the main body,
all technical details are relegated to Appendices B and C. The classical PAC and
VC frameworks are introduced as the most prominent examples of mathematical
models for the learning task. It turns out that, despite their unquestionable gener-
ality, they only justify training error minimization and thus do not fully use the
training sample to get better estimates for the generalization error. The following
section introduces a very general framework for learning—the luckiness frame-
work. This chapter concludes with a PAC-style analysis for the particular class of
real-valued (linear) functions, which qualitatively justifies the support vector ma-
chine learning algorithm. Whereas the first chapter was concerned with bounds
which hold uniformly for all classifiers, the methods presented in the second chap-
ter provide bounds for specific learning algorithms. I start with the PAC-Bayesian
framework for learning, which studies the generalization error of Bayesian learn-
ing algorithms. Subsequently, I demonstrate that for all learning algorithms that
can be expressed as compression schemes, we can upper bound the generalization
error by the fraction of training examples used—a quantity which can be viewed
as a compression coefficient. The last section of this chapter contains a very re-
cent development known as algorithmic stability bounds. These results apply to all
algorithms for which an additional training example has only limited influence.
As with every book, this monograph has (almost surely) typing errors as well
as other mistakes. Therefore, whenever you find a mistake in this book, I would be
very grateful to receive an email at [email protected]. The list of
errata will be publicly available at http://www.kernel-machines.org.
This book is the result of two years’ work of a computer scientist with a
strong interest in mathematics who stumbled onto the secrets of statistics rather
innocently. Being originally fascinated by the field of artificial intelligence, I
started programming different learning algorithms, finally ending up with a giant
learning system that was completely unable to generalize. At this stage my interest
in learning theory was born—highly motivated by the seminal book by Vapnik
(1995). In recent times, my focus has shifted toward theoretical aspects. Taking
that into account, this book might at some stages look mathematically overloaded
(from a practitioner's point of view) or too focused on algorithmic aspects (from
a theoretician’s point of view). As it presents a snapshot of the state-of-the-art, the
book may be difficult to access for people from a completely different field. As
complementary texts, I highly recommend the books by Cristianini and Shawe-
Taylor (2000) and Vapnik (1995).
This book is partly based on my doctoral thesis (Herbrich 2000), which I wrote
at the Technical University of Berlin. I would like to thank the whole statistics
group at the Technical University of Berlin with whom I had the pleasure of
carrying out research in an excellent environment. In particular, the discussions
with Peter Bollmann-Sdorra, Matthias Burger, Jörg Betzin and Jürgen Schweiger
were very inspiring. I am particularly grateful to my supervisor, Professor Ulrich
Kockelkorn, whose help was invaluable. Discussions with him were always very
delightful, and I would like to thank him particularly for the inspiring environment
he provided. I am also indebted to my second supervisor, Professor John Shawe-
Taylor, who made my short visit at the Royal Holloway College a total success.
His support went far beyond the short period at the college, and during the many
discussions we had, I easily understood most of the recent developments in learning
theory. His "anytime availability" was invaluable while writing this
book. Thank you very much! Furthermore, I had the opportunity to visit the
Department of Engineering at the Australian National University in Canberra. I
would like to thank Bob Williamson for this opportunity, for his great hospitality
and for the many fruitful discussions. This book would not be as it is without the
many suggestions he had. Finally, I would like to thank Chris Bishop for giving all
the support I needed to complete the book during my first few months at Microsoft
Research Cambridge.
During the last three years I have had the good fortune to receive help from
many people all over the world. Their views and comments on my work were
very influential in leading to the current publication. Some of the many people I
am particularly indebted to are David McAllester, Peter Bartlett, Jonathan Bax-
ter, Shai Ben-David, Colin Campbell, Nello Cristianini, Denver Dash, Thomas
Hofmann, Neil Lawrence, Jens Matthias, Manfred Opper, Patrick Pérez, Gunnar
Rätsch, Craig Saunders, Bernhard Schölkopf, Matthias Seeger, Alex Smola, Pe-
ter Sollich, Mike Tipping, Jaco Vermaak, Jason Weston and Hugo Zaragoza. In
the course of writing the book I highly appreciated the help of many people who
proofread previous manuscripts. David McAllester, Jörg Betzin, Peter Bollmann-
Sdorra, Matthias Burger, Thore Graepel, Ulrich Kockelkorn, John Krumm, Gary
Lee, Craig Saunders, Bernhard Schölkopf, Jürgen Schweiger, John Shawe-Taylor,
Jason Weston, Bob Williamson and Hugo Zaragoza gave helpful comments on the
book and found many errors. I am greatly indebted to Simon Hill, whose help in
proofreading the final manuscript was invaluable. Thanks to all of you for your
enormous help!
Special thanks goes to one person—Thore Graepel. We became very good
friends far beyond the level of scientific cooperation. I will never forget the many
enlightening discussions we had in several pubs in Berlin and the few excellent
conference and research trips we made together, in particular our trip to Australia.
Our collaboration and friendship was, and still is, invaluable to me.
Finally, I would like to thank my wife, Jeannette, and my parents for their patience
and moral support during the whole time. I could not have done this work without
my wife’s enduring love and support. I am very grateful for her patience and
reassurance at all times.
Finally, I would like to thank Mel Goldsipe, Bob Prior, Katherine Innis and
Sharon Deacon Warne at The MIT Press for their continuing support and help
during the completion of the book.
1 Introduction
This chapter introduces the general problem of machine learning and how it re-
lates to statistical inference. It gives a short, example-based overview about super-
vised, unsupervised and reinforcement learning. The discussion of how to design a
learning system for the problem of handwritten digit recognition shows that kernel
classifiers offer some great advantages for practical machine learning. Not only are
they fast and simple to implement, but they are also closely related to one of the
most simple but effective classification algorithms—the nearest neighbor classi-
fier. Finally, the chapter discusses which theoretical questions are of particular, and
practical, importance.
It was only a few years after the introduction of the first computer that one
of man’s greatest dreams seemed to be realizable—artificial intelligence. It was
envisaged that machines would perform intelligent tasks such as vision, recognition
and automatic data analysis. One of the first steps toward intelligent machines is
machine learning.
The learning problem can be described as finding a general rule that explains
data given only a sample of limited size. The difficulty of this task is best compared
to the problem of children learning to speak and see from the continuous flow of
sounds and pictures emerging in everyday life. Bearing in mind that in the early
days the most powerful computers had much less computational power than a cell
phone today, it comes as no surprise that much theoretical research on the potential
of machines’ capabilities to learn took place at this time. One of the most influential
works was the textbook by Minsky and Papert (1969) in which they investigate
whether or not it is realistic to expect machines to learn complex tasks. They
found that simple, biologically motivated learning systems called perceptrons were
Classification Learning
If the output space has no structure except whether two elements of the output
space are equal or not, this is called the problem of classification learning. Each
element of the output space is called a class. This problem emerges in virtually
any pattern recognition task. For example, the classification of images to the
classes “image depicts the digit x” where x ranges from “zero” to “nine” or the
classification of image elements (pixels) into the classes “pixel is a part of a cancer
tissue” are standard benchmark problems for classification learning algorithms (see
Figure 1.1 Classification learning of handwritten digits. Given a sample of images from
the four different classes “zero”, “two”, “seven” and “nine” the task is to find a function
which maps images to their corresponding class (indicated by different colors of the
border). Note that there is no ordering between the four different classes.
Preference Learning
If the output space is an order space—that is, we can compare whether two
elements are equal or, if not, which one is to be preferred—then the problem of
supervised learning is also called the problem of preference learning. The elements
of the output space are called ranks. As an example, consider the problem of
learning to arrange Web pages such that the most relevant pages (according to a
query) are ranked highest (see also Figure 1.2). Although it is impossible to observe
the relevance of Web pages directly, the user would always be able to rank any pair
of documents. The mappings to be learned can either be functions from the objects
(Web pages) to the ranks, or functions that classify two documents into one of three
classes: “first object is more relevant than second object”, “objects are equivalent”
and “second object is more relevant than first object”. One is tempted to think that
we could use any classification of pairs, but the nature of ranks shows that the
represented relation on objects has to be asymmetric and transitive. That means, if
“object b is more relevant than object a” and “object c is more relevant than object
Figure 1.2 Preference learning of Web pages. Given a sample of pages with different
relevances (indicated by different background colors), the task is to find an ordering of the
pages such that the most relevant pages are mapped to the highest rank.
b”, then it must follow that “object c is more relevant than object a”. Bearing this
requirement in mind, relating classification and preference learning is possible.
Function Learning
If the output space is a metric space such as the real numbers then the learning
task is known as the problem of function learning (see Figure 1.3). One of the
greatest advantages of function learning is that by the metric on the output space
it is possible to use gradient descent techniques whenever the function's value
f (x) is a differentiable function of the object x itself. This idea underlies the
back-propagation algorithm (Rumelhart et al. 1986), which guarantees the finding
of a local optimum. An interesting relationship exists between function learning
and classification learning when a probabilistic perspective is taken. Considering
a binary classification problem, it suffices to consider only the probability that a
given object belongs to the positive class. Thus, whenever we are able to learn
the function from objects to [0, 1] (representing the probability that the object is
from the positive class), we have learned implicitly a classification function by
thresholding the real-valued output at 1/2. Such an approach is known as logistic
regression in the field of statistics, and it underlies the support vector machine
classification learning algorithm. In fact, it is common practice to use the real-
valued output before thresholding as a measure of confidence even when there is
no probabilistic model used in the learning process.
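The thresholding idea can be made concrete with a small sketch. The following R fragment is an illustration only (it is not the book's accompanying code): it assumes a fixed logistic model for the probability of the positive class and turns it into a binary classifier by thresholding at 1/2; the weight vector is hypothetical.

```r
## Illustrative sketch: a fixed logistic model p(x) = 1/(1 + exp(-<w, x>))
## turned into a binary classifier by thresholding at 1/2.
logistic_prob <- function(x, w) 1 / (1 + exp(-sum(w * x)))

classify <- function(x, w) {
  p <- logistic_prob(x, w)   # estimated probability of the positive class
  if (p >= 0.5) +1 else -1   # threshold the real-valued output at 1/2
}

w <- c(1.5, -0.7)            # hypothetical weight vector
classify(c(1, 0.2), w)       # returns +1
classify(c(-1, 2), w)        # returns -1
```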
Figure 1.3 Function learning in action. Given is a sample of points together with asso-
ciated real-valued target values (crosses). Shown are the best fits to the set of points using
a linear function (left), a cubic function (middle) and a 10th degree polynomial (right).
Intuitively, the cubic function class seems to be most appropriate; using linear functions
the points are under-fitted whereas the 10th degree polynomial over-fits the given sample.
Figure 1.4 (Left) Clustering of 150 training points (black dots) into three clusters (white
crosses). Each color depicts a region of points belonging to one cluster. (Right) Probability
density of the estimated mixture model.
hidden variables—a fact that makes estimation of the unknown probability mea-
sure quite intricate. Most of the estimation procedures used in practice fall into the
realm of expectation-maximization (EM) algorithms (Dempster et al. 1977).
Figure 1.5 (Left) The first 49 digits (28 × 28 pixels) of the MNIST dataset. (Right)
The 49 images in a data matrix obtained by concatenation of the 28 rows thus resulting in
28 · 28 = 784-dimensional data vectors. Note that we sorted the images such that the four
images of "zero" come first, then the seven images of "one", and so on.
knowledge. On the other hand, to discover those actions the learning algorithm has
to choose actions not tried in the past and thus explore the state space. There is no
general solution to this dilemma, but it is clear that neither of the two options, pursued
exclusively, can lead to an optimal strategy. As this learning problem is only of partial
relevance to this book, the interested reader should refer to Sutton and Barto (1998)
for an excellent introduction to this problem.
Figure 1.6 Classification of three new images (leftmost column) by finding the five
images from Figure 1.5 which are closest to it using the Euclidean distance.
algorithm with some example classifications of typical digits. In this particular case
it is relatively easy to acquire at least 100–1000 images and label them manually
(see Figure 1.5 (left)).
Our next decision involves the representation of the images in the computer.
Since the scanning device supplies us with an image matrix of intensity values at
fixed positions, it seems natural to use this representation directly, i.e., concatenate
the rows of the image matrix to obtain a long data vector for each image. As a
consequence, the data can be represented by a matrix X with as many rows as
number of training samples and as many columns as there are pixels per image
(see Figure 1.5 (right)). Each row xi of the data matrix X represents one image of
a digit by the intensity values at the fixed pixel positions.
Now consider a very simple learning algorithm where we just store the training
examples. In order to classify a new test image, we assign it to the class of the
training image closest to it. This surprisingly easy learning algorithm is also known
as the nearest-neighbor classifier and has almost optimal performance in the limit
of a large number of training images. In our example we see that nearest neighbor
classification seems to perform very well (see Figure 1.6). However, this simple
and intuitive algorithm suffers from two major problems:
1. It requires a distance measure which must be small between images depicting
the same digit and large between images showing different digits. In the example
shown in Figure 1.6 we use the Euclidean distance
$$\left\|x - \tilde{x}\right\| \stackrel{\mathrm{def}}{=} \sqrt{\sum_{j=1}^{N}\left(x_j - \tilde{x}_j\right)^2},$$
where N = 784 is the number of different pixels. From Figure 1.6 we already
see that not all of the closest images seem to be related to the correct class, which
indicates that we should look for a better representation.
2. It requires storage of the whole training sample and the computation of the
distance to all the training samples for each classification of a new image. This be-
comes a computational problem as soon as the dataset gets larger than a few hun-
dred examples. Although the method of nearest neighbor classification performs
better for training samples of increasing size, it becomes less realizable in practice.
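As an illustration of the representation and the classification step just described, the following R sketch (mine, not the book's accompanying code) stores the training images as rows of the data matrix X and classifies a new image by its Euclidean nearest neighbor; the toy data below is purely hypothetical.

```r
## Illustrative sketch: nearest-neighbor classification of image vectors stored
## as rows of a data matrix X, using the Euclidean distance given above.
nearest_neighbor <- function(X, y, x_new) {
  # squared Euclidean distances between x_new and every training image (row of X)
  d2 <- rowSums((X - matrix(x_new, nrow(X), ncol(X), byrow = TRUE))^2)
  y[which.min(d2)]            # class of the closest training image
}

## toy usage with random "images" of N = 784 pixels
N <- 784
X <- matrix(runif(10 * N), nrow = 10)   # 10 training images, one per row
y <- rep(0:9, length.out = 10)          # their (hypothetical) digit labels
x_new <- X[3, ] + rnorm(N, sd = 0.01)   # a slightly perturbed copy of image 3
nearest_neighbor(X, y, x_new)           # returns y[3]
```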
In order to address the second problem, we introduce ten parameterized functions
f 0 , . . . , f 9 that map image vectors to real numbers. A positive number f i (x) indi-
cates belief that the image vector is showing the digit i; its magnitude should be
related to the degree with which the image is believed to depict the digit i. The
interesting question is: Which functions should we consider? Clearly, as compu-
tational time is the only reason to deviate from nearest-neighbor classification, we
should only consider functions whose value can quickly be evaluated. On the other
hand, the functions should be powerful enough to approximate the classification as
carried out by the nearest neighbor classifier. Consider a linear function, i.e.,
$$f_i(x) = \sum_{j=1}^{N} w_j \cdot x_j, \qquad (1.1)$$
which is simple and quickly computable. We summarize all the images showing
the same digit in the training sample into one parameter vector w for the function
f_i. Further, by the Cauchy-Schwarz inequality, we know that the difference of
this function evaluated at two image vectors x and x̃ is bounded from above by
‖w‖ · ‖x − x̃‖. Hence, if we only consider parameter vectors w with a constant norm
‖w‖, it follows that whenever two points are close to each other, any linear function
would assign similar real-values to them as well. These two properties make linear
functions perfect candidates for designing the handwritten digit recognizer.
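A small sketch may help. Assuming the ten weight vectors are stacked as rows of a 10 × N matrix W (a hypothetical parameter matrix; the text has not yet explained how to learn it), the ten linear functions of equation (1.1) and the resulting digit prediction can be computed as follows (illustrative R code, not from the book).

```r
## Illustrative sketch: ten linear functions f_0, ..., f_9 as in equation (1.1),
## one weight vector per digit, stacked as the rows of a 10 x N matrix W.
linear_scores <- function(W, x) as.vector(W %*% x)   # f_i(x) = sum_j w_ij * x_j

predict_digit <- function(W, x) which.max(linear_scores(W, x)) - 1   # rows map to digits 0..9

N <- 784
W <- matrix(rnorm(10 * N), nrow = 10)   # hypothetical weight vectors, one per digit
x <- runif(N)                            # a new image vector
predict_digit(W, x)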
In order to address the first problem, we consider a generalized notion of a
distance measure as given by
$$\left\|x - \tilde{x}\right\| = \sqrt{\sum_{j=1}^{n}\left(\phi_j(x) - \phi_j(\tilde{x})\right)^2}. \qquad (1.2)$$
For example, we could consider all products of intensity values at two different positions, i.e., φ(x) =
(x₁x₁, . . . , x₁x_N, x₂x₁, . . . , x_N x_N), which allows us to exploit correlations in
the image. The advantage of choosing a distance measure as given in equation
(1.2) becomes apparent when considering that for all parameter vectors w that
can be represented as a linear combination of the mapped training examples
φ (x1 ) , . . . , φ (xm ),
$$w = \sum_{i=1}^{m} \alpha_i \phi(x_i),$$
the resulting linear function in equation (1.1) can be written purely in terms of a
linear combination of inner product functions in feature space, i.e.,
$$f(x) = \sum_{i=1}^{m} \alpha_i \underbrace{\sum_{j=1}^{n} \phi_j(x_i)\cdot\phi_j(x)}_{k(x_i,\,x)} = \sum_{i=1}^{m} \alpha_i k(x_i, x).$$
In contrast to standard linear models, we need never explicitly construct the param-
eter vector w. Specifying the inner product function k, which is called the kernel, is
sufficient. The linear function involving a kernel is known as kernel classifier and
is parameterized by the vector α ∈ ℝᵐ of expansion coefficients. What has not yet
been addressed is the question of which parameter vector w or α to choose when
given a training sample. This is the topic of the first part of this book.
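As a sketch of such a kernel classifier, the following R fragment (illustrative, not from the book) evaluates f(x) = Σᵢ αᵢ k(xᵢ, x) with the quadratic kernel k(x, x̃) = ⟨x, x̃⟩² corresponding to the product features mentioned above; the expansion coefficients α are assumed to be given.

```r
## Illustrative sketch: a kernel classifier f(x) = sum_i alpha_i * k(x_i, x)
## with the quadratic kernel k(x, x~) = <x, x~>^2.
k_quad <- function(x, x_tilde) (sum(x * x_tilde))^2

kernel_classifier <- function(X, alpha, x_new) {
  k_vec <- apply(X, 1, k_quad, x_tilde = x_new)   # kernel evaluations k(x_i, x_new)
  sum(alpha * k_vec)                              # expansion over the training objects
}

m <- 10; N <- 784
X <- matrix(runif(m * N), nrow = m)   # training images as rows
alpha <- rnorm(m)                     # hypothetical expansion coefficients
kernel_classifier(X, alpha, runif(N))
```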
The first part of this book may lead the reader to wonder—after learning so many
different learning algorithms—which one to use for a particular problem. This
legitimate question is one that the results from learning theory try to answer.
Learning theory is concerned with the study of learning algorithms’ performance.
By casting the learning problem into the powerful framework of probability theory,
we aim to answer the following questions:
1. How many training examples do we need to obtain a given level of performance?
2. Given a particular training sample, what performance can we guarantee for the function learned from it?
3. Given two different learning algorithms, which one should we choose for a
given training sample so as to maximize the performance of the resulting learning
algorithm?
I should point out that all these questions must be followed by the additional phrase
“with high probability over the random draw of the training sample”. This require-
ment is unavoidable and reflects the fact that we model the training sample as a
random sample. Thus, in any of the statements about the performance of learning
algorithms we have the inherent duality between precision and confidence: The
more precise the statement on the algorithm’s performance is, e.g., the prediction
error is not larger than 5%, the less confident it is. In the extreme case, we can say
that the prediction error is exactly 5%, but we have absolutely no (mathematical)
confidence in this statement. The performance measure is most easily defined when
considering supervised learning tasks. Since we are given a target value for each
object, we need only to measure by how much the learned function deviates from
the target value at all objects—in particular for the unseen objects. This quantity is
modeled by the expected loss of a function over the random draw of object-target
pairs. As a consequence our ultimate interest is in (probabilistic) upper bounds on
the expected loss of the function learned from the random training sample, i.e.,
P (training samples s.t. the expected loss of the function learned ≤ ε (δ)) ≥ 1 − δ .
The function ε is called a bound on the generalization error because it quantifies
how much we are misled in choosing the optimal function when using a learning
algorithm, i.e., when generalizing from a given training sample to a general pre-
diction function. Having such a bound at our disposal allows us to answer the three
questions directly:
1. Since the function ε is dependent on the size of the training sample1 , we fix ε
and solve for the training sample size.
2. This is exactly the question answered by the generalization error bound. Note
that the ultimate interest is in bounds that depend on the particular training sample
observed; a bound independent of the training sample would give a guarantee ex-
ante which therefore cannot take advantage of some “simplicity” in the training
sample.
3. If we evaluate the two generalization errors for the two different learning
algorithms, we should choose the algorithm with the smaller generalization error
1 In fact, it will be inversely related because with increasing size of the training sample the expected loss will be
non-increasing due to results from large deviation theory (see Appendix A.5.2).
bound. Note that the resulting bound would no longer hold for the selection
algorithm. Nonetheless, Part II of this book shows that this can be achieved with a
slight modification.
It comes as no surprise that learning theory needs assumptions to hold. In contrast
to parametric statistics, which assumes that the training data is generated from a
distribution out of a given set, the main interest in learning theory is in bounds
that hold for all possible data distributions. The only way this can be achieved is to
constrain the class of functions used. In this book, this is done by considering linear
functions only. A practical advantage of having results that are valid for all possible
probability measures is that we are able to check whether the assumptions imposed
by the theory are valid in practice. The price we have to pay for this generality is
that most results of learning theory are more an indication than a good estimate
of the real generalization error. Although recent efforts in this field aim to tighten
generalization error bounds as much as possible, it will always be the case that any
distribution-dependent generalization error bound is superior in terms of precision.
Apart from enhancing our understanding of the learning phenomenon, learn-
ing theory is supposed to serve another purpose as well—to suggest new algo-
rithms. Depending on the assumption we make about the learning algorithms, we
will arrive at generalization error bounds involving different measures of (data-
dependent) complexity terms. Although these complexity terms give only upper
bounds on the generalization error, they provide us with ideas as to which quanti-
ties should be optimized. This is the topic of the second part of the book.
I Learning Algorithms
2 Kernel Classifiers from a Machine Learning
Perspective
This chapter presents the machine learning approach to learning kernel classifiers.
After a short introduction to the problem of learning a linear classifier, it shows
how learning can be viewed as an optimization task. As an example, the classical
perceptron algorithm is presented. This algorithm is an implementation of a more
general principle known as empirical risk minimization. The chapter also presents
a descendant of this principle, known as regularized (structural) risk minimization.
Both these principles can be applied in the primal or dual space of variables. It is
shown that the latter is computationally less demanding if the method is extended
to nonlinear classifiers in input space. Here, the kernel technique is the essential
method used to invoke the nonlinearity in input space. The chapter presents several
families of kernels that allow linear classification methods to be applicable even
if no vectorial representation is given, e.g., strings. Following this, the support
vector method for classification learning is introduced. This method elegantly
combines the kernel technique and the principle of structural risk minimization.
The chapter finishes with a presentation of a more recent kernel algorithm called
adaptive margin machines. In contrast to the support vector method, the latter aims
at minimizing a leave-one-out error bound rather than a structural risk.
the learning problem is called a binary classification learning task. Suppose we are
given a sample of m training objects,
x = (x₁, . . . , xₘ) ∈ 𝒳ᵐ,
together with a sample of corresponding classes,
y = (y₁, . . . , yₘ) ∈ 𝒴ᵐ,
which together form the training sample z = (x, y),
and assume that z is a sample drawn identically and independently distributed (iid)
according to some unknown probability measure PZ .
Definition 2.1 (Learning problem) The learning problem is to find the unknown
(functional) relationship h ∈ 𝒴^𝒳 between objects x ∈ 𝒳 and targets y ∈ 𝒴
based solely on a sample z = (x, y) = ((x₁, y₁), . . . , (xₘ, yₘ)) ∈ (𝒳 × 𝒴)ᵐ
of size m ∈ ℕ drawn iid from an unknown distribution P_XY. If the output space
𝒴 contains a finite number |𝒴| of elements then the task is called a classification
learning problem.
Thus, for a given object x ∈ 𝒳 we could evaluate the distribution P_Y|X=x over
classes and decide on the class ŷ ∈ 𝒴 with the largest probability P_Y|X=x(ŷ).
Estimating PZ based on the given sample z, however, poses a nontrivial problem.
In the (unconstrained) class of all probability measures, the empirical measure
$$\mathbf{v}_z((x, y)) = \frac{\left|\{i \in \{1, \ldots, m\} \mid z_i = (x, y)\}\right|}{m} \qquad (2.2)$$
1 Though mathematically the training sample is a sequence of iid drawn object-class pairs (x, y) we sometimes
take the liberty of calling the training sample a training set. The notation z ∈ z then refers to the fact that there
exists an element z i in the sequence z such that z i = z.
assigns zero probability to all unseen object-class pairs and thus cannot be used
for predicting further classes given a new object x ∈ 𝒳. In order to resolve this
difficulty, we need to constrain the set 𝒴^𝒳 of possible mappings from objects
x ∈ 𝒳 to classes y ∈ 𝒴. Often, such a restriction is imposed by assuming a given
hypothesis space ℋ ⊆ 𝒴^𝒳 of functions² h : 𝒳 → 𝒴. Intuitively, similar objects xᵢ
should be mapped to the same class yi . This is a very reasonable assumption if we
wish to infer classes on unseen objects x based on a given training sample z only.
A convenient way to model similarity between objects is through an inner
product function ⟨·, ·⟩, which has the appealing property that its value is maximal
whenever its arguments are equal. In order to employ inner products to measure
similarity between objects we need to represent them in an inner product space,
which we assume to be ℓ₂ⁿ (see Definition A.39).
which can assign digital images to the classes “image is a picture of 1” and “image
is not a picture of 1". Typically, each feature φᵢ : 𝒳 → ℝ is the intensity of
ink at a fixed picture element, or pixel, of the image. Hence, after digitization
at N × N pixel positions, we can represent each image as a high dimensional
vector x (to be precise, N²-dimensional). Obviously, only a small subset of the
N²-dimensional space is occupied by handwritten digits³, and, due to noise in the
digitization, we might have the same picture x mapped to different vectors xᵢ, xⱼ.
This is assumed encapsulated in the probability measure PX . Moreover, for small
N , similar pictures xi ≈ x j are mapped to the same data vector x because the
single pixel positions are too coarse a representation of a single image. Thus, it
seems reasonable to assume that one could hardly find a deterministic mapping
from N²-dimensional vectors to the class "picture of 1". This gives rise to a
probability measure PY|X=x . Both these uncertainties—which in fact constitute the
basis of the learning problem—are expressed via the unknown probability measure
PZ (see equation (2.1)).
In this book, we will be concerned with linear functions or classifiers only. Let us
formally define what we mean when speaking about linear classifiers.
Definition 2.4 (Linear function and linear classifier) Given a feature mapping
φ : 𝒳 → 𝒦 ⊆ ℓ₂ⁿ, the function f : 𝒳 → ℝ of the form⁴
$$f_w(x) = \langle\phi(x), w\rangle = \langle x, w\rangle$$
is called a linear function, and the classifier obtained from it,
$$h_w(x) = \mathrm{sign}(\langle x, w\rangle), \qquad (2.3)$$
is called a linear classifier.
Clearly, the intuition that similar objects are mapped to similar classes is satisfied
by such a model because, by the Cauchy-Schwarz inequality (see Theorem A.106),
we know that
$$\left|\langle w, x_i\rangle - \langle w, x_j\rangle\right| = \left|\langle w, x_i - x_j\rangle\right| \leq \|w\| \cdot \|x_i - x_j\|;$$
3 To see this, imagine that we generate an image by tossing a coin N² times and marking a black dot in an N × N
array whenever the coin shows heads. Then, it is very unlikely that we will obtain an image of a digit. This outcome is
expected as digits presumably have a pictorial structure in common.
4 In order to highlight the dependence of f on w, we use f w when necessary.
that is, whenever two data points are close in feature space (small ‖xᵢ − xⱼ‖), their
difference in the real-valued output of a hypothesis with weight vector w ∈ 𝒦 is
also small. It is important to note that the classification h_w(x) remains unaffected
if we rescale the weight w by some positive constant,
$$\forall \lambda > 0: \ \forall x \in \mathcal{X}: \quad \mathrm{sign}(\langle x, \lambda w\rangle) = \mathrm{sign}(\lambda\langle x, w\rangle) = \mathrm{sign}(\langle x, w\rangle). \qquad (2.4)$$
Thus, if not stated otherwise, we assume the weight vector w to be of unit length,
$$\mathcal{F} \stackrel{\mathrm{def}}{=} \{x \mapsto \langle x, w\rangle \mid w \in \mathcal{W}\} \subseteq \mathbb{R}^{\mathcal{X}}, \qquad (2.5)$$
$$\mathcal{W} \stackrel{\mathrm{def}}{=} \{w \in \mathcal{K} \mid \|w\| = 1\} \subset \mathcal{K}, \qquad (2.6)$$
$$\mathcal{H} \stackrel{\mathrm{def}}{=} \{h_w = \mathrm{sign}(f_w) \mid f_w \in \mathcal{F}\} \subseteq \mathcal{Y}^{\mathcal{X}}. \qquad (2.7)$$
Ergo, the set ℱ, also referred to as the hypothesis space, is isomorphic to the unit
hypersphere 𝒲 in ℝⁿ (see Figure 2.1).
The task of learning reduces to finding the “best” classifier f ∗ in the hypothesis
space ℱ. The most difficult question at this point is: "How can we measure the
goodness of a classifier f?" We would like the goodness of a classifier to be
strongly dependent on the unknown measure PZ ; otherwise, we would not have
a learning problem because f ∗ could be determined without knowledge of the
underlying relationship between objects and classes expressed via PZ .
pointwise w.r.t. the object-class pairs (x, y) due to the independence assumption
made for z.
a positive, real-valued function, making the maximization task computationally
easier.
All these requirements can be encapsulated in a fixed loss function l : ℝ × 𝒴 → ℝ.
Here l ( f (x) , y) measures how costly it is when the prediction at the data point
x is f (x) but the true class is y. It is natural to assume that l (+∞, +1) =
l (−∞, −1) = 0, that is, the greater y · f (x) the better the prediction of f (x)
was. Based on the loss l it is assumed that the goodness of f is the expected loss
E_XY[l(f(X), Y)], sometimes referred to as the expected risk. In summary, the
ultimate goal of learning can be described as:
Example 2.7 (Cost matrices) Returning to Example 2.3 we see that the loss given
by equation (2.9) is inappropriate for the task at hand. This is due to the fact that
there are approximately ten times more “no pictures of 1” than “pictures of 1”.
Therefore, a classifier assigning each image to the class “no picture of 1” (this
classifier is also known as the default classifier) would have an expected risk of
about 10%. In contrast, a classifier assigning each image to the class “picture of
1” would have an expected risk of about 90%. To correct this imbalance of prior
probabilities P_Y(+1) and P_Y(−1) one could define a 2 × 2 cost matrix
$$\mathbf{C} = \begin{pmatrix} 0 & c_{12} \\ c_{21} & 0 \end{pmatrix}.$$
Figure 2.1 (Left) The hypothesis space 𝒲 for linear classifiers in ℝ³. Each single point
x defines a plane in ℝ³ and thus incurs a great circle {w ∈ 𝒲 | ⟨x, w⟩ = 0} in hypothesis
space (black lines). The three data points in the right picture induce the three planes in the
left picture. (Right) Considering a fixed classifier w (single dot on the left) the decision
plane {x ∈ ℝ³ | ⟨x, w⟩ = 0} is shown.
Let 1_y and 1_{sign(f(x))} denote the 2 × 1 indicator vectors of the true class and the
classification made by f ∈ ℱ at x ∈ 𝒳. Then we have a cost matrix classification
loss l_C given by
$$l_{\mathbf{C}}(f(x), y) \stackrel{\mathrm{def}}{=} \mathbf{1}_y'\,\mathbf{C}\,\mathbf{1}_{\mathrm{sign}(f(x))} = \begin{cases} c_{12} & y = +1 \text{ and } f(x) < 0 \\ c_{21} & y = -1 \text{ and } f(x) > 0 \\ 0 & \text{otherwise}. \end{cases}$$
Obviously, setting c12 = PY (−1) and c21 = PY (+1) leads to equal risks for both
default classifiers and thus allows the incorporation of prior knowledge on the
probabilities PY (+1) and PY (−1).
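A direct transcription of this loss into R might look as follows; this is only an illustrative sketch, and the prior values used for c₁₂ and c₂₁ are hypothetical numbers for the digit example.

```r
## Illustrative sketch of the cost matrix classification loss l_C from Example 2.7,
## with c12 = P_Y(-1) and c21 = P_Y(+1) as suggested in the text.
cost_loss <- function(f_x, y, c12, c21) {
  if (y == +1 && f_x < 0) c12        # false negative
  else if (y == -1 && f_x > 0) c21   # false positive
  else 0                             # correct classification
}

c12 <- 0.9   # hypothetical prior P_Y(-1), e.g. "no picture of 1"
c21 <- 0.1   # hypothetical prior P_Y(+1)
cost_loss(-2.3, +1, c12, c21)   # costs 0.9
cost_loss(+0.7, -1, c12, c21)   # costs 0.1
cost_loss(+1.2, +1, c12, c21)   # costs 0
```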
$$X_0(w) = \{x \in \mathcal{X} \mid \langle x, w\rangle = 0\};$$
this set is sometimes called the decision surface. Our hypothesis space 𝒲 for
weight vectors w is the unit hypersphere in ℝⁿ (see equation (2.6)). Hence, having
fixed x, the unit hypersphere is subdivided into three disjoint sets W₊₁(x) ⊂
𝒲, W₋₁(x) ⊂ 𝒲 and W₀(x) ⊂ 𝒲 by exactly the same rule, i.e.,
$$W_y(x) = \{w \in \mathcal{W} \mid \mathrm{sign}(\langle x, w\rangle) = y\}.$$
As can be seen in Figure 2.1 (left), for a finite sample x = (x1 , . . . , xm ) of training
objects and any vector y = (y1 , . . . , ym ) ∈ {−1, +1}m of labelings the resulting
equivalence classes
$$W_z = \bigcap_{i=1}^{m} W_{y_i}(x_i)$$
are (open) convex polyhedra. Clearly, the labeling of the xᵢ determines the training
error of each equivalence class
$$W_z = \{w \in \mathcal{W} \mid \forall i \in \{1, \ldots, m\}: \ \mathrm{sign}(\langle x_i, w\rangle) = y_i\}.$$
$$\mathcal{A}: \bigcup_{m=1}^{\infty} (\mathcal{X} \times \mathcal{Y})^m \to \mathcal{F}.$$
In other words, the generalization error measures the deviation of the expected risk
of the function learned from the minimum expected risk.
The most well known learning principle is the empirical risk minimization (ERM)
principle. Here, we replace PZ by v z , which contains all knowledge that can be
drawn from the training sample z. As a consequence the expected risk becomes an
empirically computable quantity known as the empirical risk.
$$R_{\mathrm{emp}}[f, z] \stackrel{\mathrm{def}}{=} \frac{1}{m}\sum_{i=1}^{m} l(f(x_i), y_i), \qquad (2.11)$$
By construction, Remp can be minimized solely on the basis of the training sample
z. We can write any ERM algorithm in the form
$$\mathcal{A}_{\mathrm{ERM}}(z) \stackrel{\mathrm{def}}{=} \mathop{\mathrm{argmin}}_{f \in \mathcal{F}} R_{\mathrm{emp}}[f, z]. \qquad (2.12)$$
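For a finite set of candidate weight vectors, empirical risk minimization can be sketched in a few lines of R. The fragment below is an illustration under two simplifying assumptions of mine, namely the zero-one loss l(f(x), y) = I(y·f(x) ≤ 0) and a finite candidate set W given as rows of a matrix; it is not the book's code.

```r
## Illustrative sketch of empirical risk minimization (2.11)-(2.12) over a finite
## set of candidate weight vectors, using the zero-one loss.
## X is m x n, y is in {-1,+1}^m, W holds candidate weight vectors as rows.
emp_risk <- function(w, X, y) mean(y * as.vector(X %*% w) <= 0)   # training error

erm <- function(W, X, y) {
  risks <- apply(W, 1, emp_risk, X = X, y = y)
  W[which.min(risks), ]   # a weight vector with minimal empirical risk
}
```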
In order to be a consistent learning principle, the expected risk R[𝒜_ERM(z)] must
converge to the minimum expected risk R[f*], i.e.,
$$\forall \varepsilon > 0: \quad \lim_{m \to \infty} \mathbf{P}_{\mathsf{Z}^m}\left(R[\mathcal{A}_{\mathrm{ERM}}(\mathsf{Z})] - R[f^*] > \varepsilon\right) = 0, \qquad (2.13)$$
where the randomness is due to the random choice of the training sample z.
It is known that the empirical risk Remp [ f, z] of a fixed function f converges
toward R [ f ] at an exponential rate w.r.t. m for any probability measure PZ (see
Subsection A.5.2). Nonetheless, it is not clear whether this holds when we con-
sider the empirical risk minimizer ERM (z) given by equation (2.12) because this
function changes over the random choice of training samples z. We shall see in
Chapter 4 that the finiteness of the number n of feature space dimensions com-
pletely determines the consistency of the ERM principle.
The first iterative procedure for learning linear classifiers presented is the percep-
tron learning algorithm proposed by F. Rosenblatt. The learning algorithm is given
on page 321 and operates as follows:
1. At the start the weight vector w is set to 0.
2. For each training example (xi , yi ) it is checked whether the current hypothesis
correctly classifies it or not. This can be achieved by evaluating the sign of yᵢ⟨xᵢ, w⟩.
If the ith training sample is not correctly classified then the misclassified pattern xi
is added to or subtracted from the current weight vector depending on the correct
class yi . In summary, the weight vector w is updated to w + yi xi .
3. If no mistakes occur during an iteration through the training sample z the
algorithm stops and outputs w.
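The three steps above translate directly into code. The following R function is a minimal sketch of the primal perceptron written for illustration (the book's own pseudo-code is in Appendix D on page 321); it assumes the rows of X are the mapped training objects and that the sample is linearly separable.

```r
## Illustrative sketch of the primal perceptron learning algorithm.
## X: m x n matrix of (mapped) training objects; y: labels in {-1,+1}^m.
perceptron <- function(X, y, max_epochs = 1000) {
  w <- rep(0, ncol(X))                    # step 1: start with the zero vector
  for (epoch in 1:max_epochs) {
    mistakes <- 0
    for (i in 1:nrow(X)) {
      if (y[i] * sum(X[i, ] * w) <= 0) {  # step 2: y_i <x_i, w> not positive
        w <- w + y[i] * X[i, ]            # update toward y_i x_i
        mistakes <- mistakes + 1
      }
    }
    if (mistakes == 0) return(w)          # step 3: no mistakes in a full pass
  }
  w
}
```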
The optimization algorithm is a mistake-driven procedure, and it assumes the
existence of a version space V(z) ⊆ 𝒲, i.e., it assumes that there exists at least
one classifier f such that Remp [ f, z] = 0.
the version space, i.e., the set of all classifiers consistent with the training sample.
In particular, for linear classifiers given by (2.5)–(2.7) we synonymously call the
set of consistent weight vectors
$$V(z) \stackrel{\mathrm{def}}{=} \{w \in \mathcal{W} \mid \forall i \in \{1, \ldots, m\}: \ y_i\langle x_i, w\rangle > 0\} \subseteq \mathcal{W}$$
the version space.
Since our classifiers are linear in feature space, such training samples are called
linearly separable. In order that the perceptron learning algorithm works for any
training sample it must be ensured that the unknown probability measure PZ satis-
fies R[f*] = 0. Viewed differently, this means that P_Y|X=x(y) = I_{y=h*(x)}, h* ∈ ℋ,
where h ∗ is sometimes known as the teacher perceptron. It should be noticed that
the number of parameters learned by the perceptron algorithm is n, i.e., the dimen-
sionality of the feature space 𝒦. We shall call this space of parameters the primal
space, and the corresponding algorithm the primal perceptron learning algorithm.
As depicted in Figure 2.2, perceptron learning is best viewed as starting from an
arbitrary⁷ point w₀ on the hypersphere 𝒲, and each time we observe a misclas-
sification with a training example (xi , yi ), we update wt toward the misclassified
training object yi xi (see also Figure 2.1 (left)). Thus, geometrically, the perceptron
learning algorithm performs a walk through the primal parameter space with each
step made in the direction of decreasing training error. Note, however, that in the
formulation of the algorithm given on page 321 we do not normalize the weight
vector w after each update.
Figure 2.2 A geometrical picture of the update step in the perceptron learning algorithm
in ℝ². Evidently, xᵢ ∈ ℝ² is misclassified by the linear classifier (dashed line) having
normal w_t (solid line with arrow). Then, the update step amounts to changing w_t into
w_{t+1} = w_t + yᵢxᵢ and thus yᵢxᵢ "attracts" the hyperplane. After this step, the misclassified
point xᵢ is correctly classified.
it can produce a given labeling y regardless of how bad the subsequent prediction
might be on new, as yet unseen, data points z = (x, y). This effect is also known
as overfitting, i.e., the empirical risk as given by equation (2.11) is much smaller
than the expected risk (2.8) we originally aimed at minimizing.
One way to overcome this problem is the method of regularization. In our
example this amounts to introducing a regularizer a priori, that is, a functional
Ω : ℱ → ℝ⁺, and defining the solution to the learning problem to be
$$\mathcal{A}(z) \stackrel{\mathrm{def}}{=} \mathop{\mathrm{argmin}}_{f \in \mathcal{F}} \underbrace{R_{\mathrm{emp}}[f, z] + \lambda\Omega[f]}_{R_{\mathrm{reg}}[f, z]}. \qquad (2.14)$$
express the empirical risk as the negative log-probability of the training sample
z, given a classifier f . In general, this can be achieved by
$$\mathbf{P}_{\mathsf{Z}^m|\mathsf{F}=f}(z) = \prod_{i=1}^{m} \mathbf{P}_{\mathsf{Y}|\mathsf{X}=x_i,\mathsf{F}=f}(y_i)\,\mathbf{P}_{\mathsf{X}|\mathsf{F}=f}(x_i),$$
$$\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{F}=f}(y) = \frac{\exp(-l(f(x), y))}{\sum_{\tilde{y} \in \mathcal{Y}} \exp(-l(f(x), \tilde{y}))} = \frac{1}{C(x)}\exp(-l(f(x), y)).$$
Assuming a prior density f_F(f) = exp(−λm Ω[f]), by Bayes' theorem we have
the posterior density
$$\mathbf{f}_{\mathsf{F}|\mathsf{Z}^m=z}(f) \propto \exp\left(-\sum_{i=1}^{m} l(f(x_i), y_i)\right)\exp(-\lambda m\,\Omega[f]) \propto \exp\left(-m\left(R_{\mathrm{emp}}[f, z] + \lambda\Omega[f]\right)\right).$$
The MAP estimate is that classifier f MAP which maximizes the last expression, i.e.,
the mode of the posterior density. Taking the logarithm we see that the choice of
a regularizer is comparable to the choice of the prior probability in the Bayesian
framework and therefore reflects prior knowledge.
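To make equation (2.14) concrete, the following R sketch minimizes the regularized risk for linear functions under two specific choices made here purely for illustration, namely the squared loss and the regularizer Ω[f_w] = ‖w‖²; with these choices the minimizer has a closed form. The book itself treats more general losses and regularizers.

```r
## Illustrative sketch of regularized risk minimization (2.14) for linear functions
## with squared loss and Omega[f_w] = ||w||^2. The minimizer of
## (1/m) * ||Xw - y||^2 + lambda * ||w||^2 is w = (X'X + lambda*m*I)^(-1) X'y.
reg_risk_minimizer <- function(X, y, lambda) {
  m <- nrow(X); n <- ncol(X)
  solve(t(X) %*% X + lambda * m * diag(n), t(X) %*% y)
}
## sign(<x, w>) then serves as the classification of a new object x.
```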
In Figure 2.3 (left) the mapping is applied to the unit square [0, 1]² and the
resulting manifold in ℝ³ is shown. Note that in this case the decision surface
X₀(w)
Figure 2.3 (Left) Mapping of the unit square [0, 1]² ⊂ ℝ² to the feature space 𝒦 ⊆ ℓ₂³
by equation (2.15). The mapped unit square forms a two-dimensional sub-manifold in ℝ³
though dim(𝒦) = 3. (Right) Nine different decision surfaces obtained by varying w₁ and
w₃ in equation (2.16). The solid, dashed and dot-dashed lines result from varying w₃ for
different values of w₁ = −1, 0 and +1, respectively.
Using the notion of a kernel k we can therefore formulate the kernel perceptron
or dual perceptron algorithm as presented on page 322. Note that we can benefit
from the fact that, in each update step, we only change the jth component of
the expansion vector α (assuming that the mistake occurred at the jth training
point). This can change the real-valued output ⟨xᵢ, w_t⟩ at each mapped training
object xᵢ by only one summand yⱼ⟨xⱼ, xᵢ⟩, which requires just one evaluation of the
kernel function with all training objects. Hence, by caching the real-valued outputs
o ∈ ℝᵐ at all training objects we see that the kernel perceptron algorithm requires
exactly 2m memory units (for the storage of the vectors α and o) and is thus suited
for large scale problems, i.e., m ≫ 1000.
By the above reasoning we see that the Gram matrix (2.18) and the m–dimensional
vector of kernel evaluations between the training objects xi and a new test object
x ∈ 𝒳 suffice for learning and classification, respectively. It is also worth mentioning
that the Gram matrix and the feature space are called the kernel matrix and the kernel
space, respectively.
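The caching argument above can be turned into a short R sketch of the kernel (dual) perceptron; the book's own pseudo-code is on page 322, and the fragment below is only an illustration. It works entirely on the Gram matrix K and maintains the vectors α and o mentioned in the text.

```r
## Illustrative sketch of the kernel perceptron on the m x m Gram matrix
## K_ij = k(x_i, x_j), with labels y in {-1,+1}^m.
kernel_perceptron <- function(K, y, max_epochs = 1000) {
  m <- nrow(K)
  alpha <- rep(0, m)                  # expansion coefficients
  o <- rep(0, m)                      # cached real-valued outputs <x_i, w_t>
  for (epoch in 1:max_epochs) {
    mistakes <- 0
    for (j in 1:m) {
      if (y[j] * o[j] <= 0) {         # mistake at the j-th training point
        alpha[j] <- alpha[j] + y[j]   # change only the j-th expansion coefficient
        o <- o + y[j] * K[, j]        # update all cached outputs with one kernel column
        mistakes <- mistakes + 1
      }
    }
    if (mistakes == 0) break
  }
  alpha
}
```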
The key idea of the kernel technique is to invert the chain of arguments, i.e., choose
a kernel k rather than a mapping before applying a learning algorithm. Of course,
not every symmetric function k can serve as a kernel. The necessary and sufficient
conditions for k : 𝒳 × 𝒳 → ℝ to be a kernel are given by Mercer's theorem.
Before we rephrase the original theorem we give a more intuitive characterization
of Mercer kernels.
Example 2.16 (Mercer's theorem) Suppose our input space 𝒳 has a finite number
of elements, i.e., 𝒳 = {x₁, . . . , x_r}. Then, the r × r kernel matrix K with
K_ij = k(xᵢ, xⱼ) is by definition a symmetric matrix. Consider the eigenvalue
decomposition K = UΛU′, where U = (u₁; . . . ; u_r) is an r × n matrix such that
U′U = I_n, Λ = diag(λ₁, . . . , λₙ), λ₁ ≥ λ₂ ≥ ··· ≥ λₙ > 0 and n ≤ r being
known as the rank of the matrix K (see also Theorem A.83 and Definition A.62).
We have constructed a feature space and a mapping into it purely from the
kernel k. Note that λn > 0 is equivalent to assuming that K is positive semidefinite
denoted by K ≥ 0 (see Definition A.40). In order to show that K ≥ 0 is also
necessary for k to be a kernel, we assume that λn < 0. Then, the squared length of
the nth mapped object xₙ is
$$\|\phi(x_n)\|^2 = u_n'\,\Lambda\,u_n = \lambda_n < 0,$$
which contradicts the geometry in an inner product space.
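The construction in Example 2.16 is easy to reproduce numerically. The R sketch below is only an illustration (the RBF kernel and the toy data are choices made here): it eigendecomposes a kernel matrix and recovers features whose inner products reproduce K.

```r
## Illustrative sketch: features recovered from the eigendecomposition of a
## kernel matrix K built with an RBF kernel on toy data (so K >= 0).
set.seed(1)
X <- matrix(rnorm(5 * 2), nrow = 5)                  # r = 5 toy input vectors
rbf <- function(u, v, sigma = 1) exp(-sum((u - v)^2) / (2 * sigma^2))
K <- outer(1:5, 1:5, Vectorize(function(i, j) rbf(X[i, ], X[j, ])))

e <- eigen(K, symmetric = TRUE)                      # K = U Lambda U'
keep <- e$values > 1e-12                             # numerical rank n
Phi <- e$vectors[, keep] %*% diag(sqrt(e$values[keep]))  # row i is phi(x_i)

max(abs(Phi %*% t(Phi) - K))                         # inner products reproduce K
```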
Then
1. (λᵢ)_{i∈ℕ} ∈ ℓ₁,
2. ψᵢ ∈ L_∞(𝒳),
3. k can be expanded in a uniformly convergent series, i.e.,
$$k(x, \tilde{x}) = \sum_{i=1}^{\infty} \lambda_i\,\psi_i(x)\,\psi_i(\tilde{x}). \qquad (2.20)$$
Remarkably, Mercer’s theorem not only gives necessary and sufficient conditions
for k to be a kernel, but also suggests a constructive way of obtaining features φi
from a given kernel k. To see this, consider the mapping φ from 𝒳 into ℓ₂,
$$\phi(x) = \left(\sqrt{\lambda_1}\,\psi_1(x), \sqrt{\lambda_2}\,\psi_2(x), \ldots\right). \qquad (2.21)$$
Remark 2.19 (Mahalanobis metric) Consider kernels k for which the dimensionality
of the feature space 𝒦 equals that of the Mercer feature space and is finite. In order
to have equal inner products in the feature space 𝒦 and the Mercer space, we need
to redefine the inner product in 𝒦, i.e.,
$$\langle a, b\rangle = a'\Lambda b,$$
So far we have seen that there are two ways of making linear classifiers nonlinear
in input space:
1. Choose a mapping φ which explicitly gives us a (Mercer) kernel k, or
2. Choose a Mercer kernel k which implicitly corresponds to a fixed mapping φ.
Though mathematically equivalent, kernels are often much easier to define and
have the intuitive meaning of serving as a similarity measure between objects
x, x̃ ∈ 𝒳. Moreover, there exist simple rules for designing kernels on the basis
of given kernel functions.
The proofs can be found in Appendix B.1. The real impact of these design rules
becomes apparent when we consider the following corollary (for a proof see
Appendix B.1).
3. $k(x, \tilde{x}) = \exp\left(-\frac{k_1(x,x) - 2k_1(x,\tilde{x}) + k_1(\tilde{x},\tilde{x})}{2\sigma^2}\right)$, for all $\sigma \in \mathbb{R}^+$,
4. $k(x, \tilde{x}) = \frac{k_1(x,\tilde{x})}{\sqrt{k_1(x,x)\cdot k_1(\tilde{x},\tilde{x})}}$.
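The normalization rule in the fourth item above is simple to apply in code. The following R sketch (an illustration, not the book's code) wraps a given Mercer kernel k₁ so that every mapped point has unit length.

```r
## Illustrative sketch: normalizing a given Mercer kernel k1 so that all mapped
## points have unit length, as in the fourth construction above.
normalize_kernel <- function(k1) {
  function(x, x_tilde) k1(x, x_tilde) / sqrt(k1(x, x) * k1(x_tilde, x_tilde))
}

k1 <- function(x, x_tilde) (sum(x * x_tilde) + 1)^2   # a complete polynomial kernel
k  <- normalize_kernel(k1)
k(c(1, 2), c(1, 2))                                   # equals 1 for every x
```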
If the input space 𝒳 is already an N-dimensional inner product space ℓ₂ᴺ we can
use Corollary 2.21 to construct new kernels because, according to Example A.41
on page 219, the inner product function ⟨·, ·⟩ in 𝒳 is already a Mercer kernel. In
Table 2.1 some commonly used families of kernels on ℓ₂ᴺ are presented. The last
column gives the number of linearly independent features φᵢ in the induced feature
space 𝒦.
The radial basis function (RBF) kernel has the appealing property that each
linear combination of kernel functions of the training objects⁹ x̆ = (x̆₁, . . . , x̆ₘ),
$$f(\breve{x}) = \sum_{i=1}^{m} \alpha_i k(\breve{x}, \breve{x}_i) = \sum_{i=1}^{m} \alpha_i \exp\left(-\frac{\|\breve{x} - \breve{x}_i\|^2}{2\sigma^2}\right), \qquad (2.23)$$
can also be viewed as a density estimator in input space because it effectively
puts a Gaussian on each x̆ᵢ and weights its contribution to the final density by αᵢ.
Interestingly, by the third proposition of Corollary 2.21, the weighting coefficients
αᵢ correspond directly to the expansion coefficients for a weight vector w in a
classical linear model f(x̆) = ⟨φ(x̆), w⟩. The parameter σ controls the amount
of smoothing, i.e., big values of σ lead to very flat and smooth functions f; hence
it defines the unit on which distances ‖x̆ − x̆ᵢ‖ are measured (see Figure
2.4). The Mahalanobis kernel differs from the standard RBF kernel insofar as
9 In this subsection we use x̆ to denote the N-dimensional vectors in input space. Note that x := φ(x̆) denotes
a mapped input object (vector) x̆ in feature space 𝒦.
RBF kernel: $k(\breve{u}, \breve{v}) = \exp\left(-\frac{\|\breve{u} - \breve{v}\|^2}{2\sigma^2}\right)$, σ ∈ ℝ⁺; dimensionality of the feature space: ∞.
Mahalanobis kernel: $k(\breve{u}, \breve{v}) = \exp\left(-(\breve{u} - \breve{v})'\,\mathbf{\Sigma}\,(\breve{u} - \breve{v})\right)$, $\mathbf{\Sigma} = \mathrm{diag}\left(\sigma_1^{-2}, \ldots, \sigma_N^{-2}\right)$, σ₁, . . . , σ_N ∈ ℝ⁺; dimensionality of the feature space: ∞.
Table 2.1 List of kernel functions over ℓ₂ᴺ. The dimensionality of the input space is N.
each axis of the input space 𝒳 ⊆ ℓ₂ᴺ has a separate smoothing parameter, i.e., a
separate scale onto which differences on this axis are viewed. By setting σᵢ → ∞
we are able to eliminate the influence of the ith feature in input space. We shall
see in Section 3.2 that inference over these parameters is made in the context
of automatic relevance determination (ARD) of the features in input space (see
also Example 3.12). It is worth mentioning that RBF kernels map the input space
onto the surface of an infinite dimensional hypersphere because by construction
‖φ(x̆)‖ = √(k(x̆, x̆)) = 1 for all x̆ ∈ 𝒳. Finally, by using RBF kernels we have
automatically chosen a classification model which is shift invariant, i.e., translating
the whole input space by some fixed vector ă does not change anything because
$$\forall \breve{a} \in \mathcal{X}: \quad \left\|(\breve{x} + \breve{a}) - (\breve{x}_i + \breve{a})\right\|^2 = \left\|\breve{x} + \breve{a} - \breve{x}_i - \breve{a}\right\|^2 = \left\|\breve{x} - \breve{x}_i\right\|^2.$$
The most remarkable advantage in using these kernels is the saving in computational
effort, e.g., to calculate the inner product for pth degree complete polynomial
kernels we need 𝒪(N + p) operations whereas an explicit mapping would
require calculations of order 𝒪(exp(p ln(N/p))). Further, for radial basis function
kernels, it is very difficult to perform the explicit mapping.
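This computational saving is visible in code: both the RBF kernel of Table 2.1 and the pth degree polynomial kernel discussed in Example 2.22 can be evaluated from plain inner products. The R sketch below is only an illustration (not from the book) and computes full Gram matrices for both kernels.

```r
## Illustrative sketch: Gram matrices for the RBF kernel and the p-th degree
## polynomial kernel, computed without any explicit feature map.
## X holds the N-dimensional input vectors as rows.
gram_rbf <- function(X, sigma) {
  sq <- rowSums(X^2)
  D2 <- outer(sq, sq, "+") - 2 * X %*% t(X)   # squared Euclidean distances
  exp(-D2 / (2 * sigma^2))
}

gram_poly <- function(X, p) (X %*% t(X))^p    # p-th degree polynomial kernel

X <- matrix(rnorm(6 * 3), nrow = 6)
K_rbf  <- gram_rbf(X, sigma = 1)
K_poly <- gram_poly(X, p = 2)
```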
Example 2.22 (Polynomial kernel) Consider the pth degree polynomial kernel as
given in Table 2.1. In order to obtain explicit features φ : ℓ₂ᴺ → ℝ let us expand
the kernel function as follows¹⁰
$$\langle\breve{u}, \breve{v}\rangle^p = \left(\sum_{i=1}^{N} u_i v_i\right)^p = \sum_{i_1=1}^{N}\cdots\sum_{i_p=1}^{N} u_{i_1}v_{i_1}\cdots u_{i_p}v_{i_p}$$
10 For notational brevity, in this example we denote the i-th components of the vectors ŭ and v̆ by uᵢ and vᵢ,
respectively.
$$= \sum_{i_1=1}^{N}\cdots\sum_{i_p=1}^{N} \underbrace{u_{i_1}\cdots u_{i_p}}_{\phi_{\mathbf{i}}(\breve{u})}\cdot\underbrace{v_{i_1}\cdots v_{i_p}}_{\phi_{\mathbf{i}}(\breve{v})} = \langle\phi(\breve{u}), \phi(\breve{v})\rangle.$$
Although it seems that there are N^p different features, we see that two index vectors
i₁ and i₂ lead to the same feature φ_{i₁} = φ_{i₂} if they contain the same distinct
indices the same number of times but at different positions, e.g., i₁ = (1, 1, 3)
and i₂ = (1, 3, 1) both lead to φ_i(ŭ) = u₁u₁u₃ = u₁²u₃. One method of computing
the number of different features φ is to index them by an N-dimensional exponent
vector r = (r₁, . . . , r_N) ∈ {0, . . . , p}^N, i.e., φ_r(ŭ) = u₁^{r₁} · ··· · u_N^{r_N}. Since there are
exactly p summands we know that each admissible exponent vector r must obey
r₁ + ··· + r_N = p. The number of different exponent vectors r is thus exactly given
by¹¹
$$\binom{N + p - 1}{p},$$
Finally note that the complete polynomial kernel in Table 2.1 is a pth degree
polynomial kernel in an (N + 1)-dimensional input space by the following identity
$$\left(\langle\breve{u}, \breve{v}\rangle + c\right)^p = \left\langle\left(\breve{u}', \sqrt{c}\right)', \left(\breve{v}', \sqrt{c}\right)'\right\rangle^p,$$
11 This problem is known as the occupancy problem: Given p balls and N cells, how many different configura-
tions of occupancy numbers r1 , . . . , r N whose sum is exactly p exist? (see Feller (1950) for results).
12 To see this note that we first have to select $r_1$ indices $j_1, \ldots, j_{r_1}$ and set $i_{j_1} = \cdots = i_{j_{r_1}} = 1$. From the remaining $p - r_1$ indices we select $r_2$ indices and set them all to 2, etc. Thus, the total number of different index vectors $\mathbf{i}$ leading to the same exponent vector $\mathbf{r}$ equals
$$\binom{p}{r_1} \cdot \binom{p - r_1}{r_2} \cdots \binom{p - r_1 - \cdots - r_{N-2}}{r_{N-1}} = \frac{p!}{r_1! \cdots r_N!}\,,$$
which is valid because $r_1 + \cdots + r_N = p$ (taken from Feller (1950)).
where we use the fact that c ≥ 0. This justifies the number of dimensions of feature
space given in the third column of Table 2.1.
Kernels on Strings
One of the greatest advantages of kernels is that they are not limited to vectorial objects $x$ but are applicable to virtually any kind of object representation. In this subsection we will demonstrate that it is possible to efficiently
formulate computable kernels on strings. An application of string kernels is in the
analysis of DNA sequences which are given as strings composed of the symbols13
A, T, G, C. Another interesting use of kernels on strings is in the field of text cate-
gorization and classification. Here we treat each document as a sequence or string
of letters. Let us start by formalizing the notion of a string.
13 These letters correspond to the four bases Adenine, Thymine, Guanine and Cytosine.
The most trivial feature set and corresponding kernel are obtained if we con-
sider binary features φu that indicate whether the given string matches u or not,
$$\phi_u(v) = \mathbf{I}_{u=v} \quad \Leftrightarrow \quad k(u, v) = \begin{cases} 1 & \text{if } u = v \\ 0 & \text{otherwise} \end{cases}\,.$$
Though easy to compute, this kernel is unable to measure the similarity to any
object (string) not in the training sample and hence would not be useful for
learning.
A more commonly used feature set is obtained if we assume that we are given
a lexicon $B = \{b_1, \ldots, b_n\} \subset \Sigma^*$ of possible substrings which we will call words.
We compute the number of times the $i$th substring $b_i$ appears within a given string (document). Hence, the so-called bag-of-words kernel is given by
$$\phi_b(v) = \beta_b \sum_{\mathbf{i} \in I_{b,v}} \mathbf{I}_{b = v[\mathbf{i}]} \quad \Leftrightarrow \quad k_B(u, v) = \sum_{b \in B} \beta_b^2 \sum_{\mathbf{i} \in I_{b,u}} \sum_{\mathbf{j} \in I_{b,v}} \mathbf{I}_{b = u[\mathbf{i}] = v[\mathbf{j}]}\,. \qquad (2.24)$$
If we instead take as features all contiguous substrings $b$ of length at most $r$, weighting each occurrence by $\lambda^{|b|}$, we obtain the substring kernel
$$\phi_b(v) = \lambda^{|b|} \sum_{\mathbf{i} \in I_{b,v}} \mathbf{I}_{b = v[\mathbf{i}]} \quad \Leftrightarrow \quad k_r(u, v) = \sum_{s=1}^{r} \lambda^{2s} \sum_{b \in \Sigma^s} \sum_{\mathbf{i} \in I_{b,u}} \sum_{\mathbf{j} \in I_{b,v}} \mathbf{I}_{b = u[\mathbf{i}] = v[\mathbf{j}]}\,, \qquad (2.25)$$
which can be computed using the following recursion (see Appendix B.2)
$$k_r(u_1 u, v) = \begin{cases} 0 & \text{if } |u_1 u| = 0 \\ k_r(u, v) + \sum_{j=1}^{|v|} \lambda^2 \, k'_r(u_1 u, v[j:|v|]) & \text{otherwise} \end{cases}\,, \qquad (2.26)$$
$$k'_r(u_1 u, v_1 v) = \begin{cases} 0 & \text{if } r = 0 \\ 0 & \text{if } |u_1 u| = 0 \text{ or } |v_1 v| = 0 \\ 0 & \text{if } u_1 \neq v_1 \\ 1 + \lambda^2 \, k'_{r-1}(u, v) & \text{otherwise} \end{cases}\,. \qquad (2.27)$$
Since the recursion over $k_r$ invokes at most $|v|$ times the recursion over $k'_r$ (which terminates after at most $r$ steps) and is invoked itself exactly $|u|$ times, the computational complexity of this string kernel is $\mathcal{O}(r \cdot |u| \cdot |v|)$.
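As a sanity check of the feature definition (2.25), the following naive sketch (an illustration only, not the efficient recursion above) evaluates the substring kernel directly from the definition by summing $\lambda^{2s}$ over all pairs of matching contiguous substrings of length $s \le r$.

```python
def substring_kernel(u, v, r, lam):
    # Direct evaluation of the contiguous substring kernel (2.25):
    # every pair of matching substrings of length s <= r contributes lam**(2*s).
    total = 0.0
    for s in range(1, r + 1):
        for i in range(len(u) - s + 1):
            for j in range(len(v) - s + 1):
                if u[i:i + s] == v[j:j + s]:
                    total += lam ** (2 * s)
    return total

print(substring_kernel("cat", "cart", r=2, lam=0.5))
```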
One of the disadvantages of the kernels given in equations (2.24) and (2.25)
is that each feature requires a perfect match of the substring b in the given string
$v \in \Sigma^*$. In general, strings can suffer from deletion and insertion of symbols, e.g.,
for DNA sequences it can happen that a few bases are inserted somewhere in a
given substring b. Hence, rather than requiring b to be a substring we assume that
φb (v) only measures how often b is a subsequence of v and penalizes the non-
contiguity of b in v by using the length l (i) of the corresponding index vector i,
i.e.,
$$\phi_b(v) = \sum_{\{\mathbf{i} \,\mid\, b = v[\mathbf{i}]\}} \lambda^{l(\mathbf{i})} \quad \Leftrightarrow \quad k_r(u, v) = \sum_{b \in \Sigma^r} \sum_{\{\mathbf{i} \,\mid\, b = u[\mathbf{i}]\}} \sum_{\{\mathbf{j} \,\mid\, b = v[\mathbf{j}]\}} \lambda^{l(\mathbf{i}) + l(\mathbf{j})}\,. \qquad (2.28)$$
This kernel can efficiently be computed by applying the following recursion formulas (see Appendix B.2)
$$k_r(u u_s, v) = \begin{cases} 0 & \text{if } \min(|u u_s|, |v|) < r \\ k_r(u, v) + \lambda^2 \sum_{\{t \,\mid\, v_t = u_s\}} k'_{r-1}(u, v[1:(t-1)]) & \text{otherwise} \end{cases} \qquad (2.29)$$
$$k'_r(u u_s, v) = \begin{cases} 0 & \text{if } \min(|u u_s|, |v|) < r \\ 1 & \text{if } r = 0 \\ \lambda \, k'_r(u, v) + \sum_{\{t \,\mid\, v_t = u_s\}} \lambda^{|v| - t + 2} \, k'_{r-1}(u, v[1:(t-1)]) & \text{otherwise} \end{cases} \qquad (2.30)$$
Clearly, the recursion for $k_r$ is invoked exactly $|u|$ times by itself and each time invokes at most $|v|$ times the recursive evaluation of $k'_r$. The recursion over $k'_r$ is invoked at most $r$ times itself and invokes at most $|v|$ times the recursion over $k'_{r-1}$. As a consequence the computational complexity of this algorithm is $\mathcal{O}(r \cdot |u| \cdot |v|^2)$. It can be shown, however, that with simple caching it is possible to reduce the complexity further to $\mathcal{O}(r \cdot |u| \cdot |v|)$.
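The gap-weighted feature map (2.28) can also be made concrete by brute force: enumerate all index vectors, weight each occurrence of a subsequence $b$ by $\lambda^{l(\mathbf{i})}$ (taking $l(\mathbf{i})$ to be the span of the index vector), and form the kernel as the inner product of the resulting feature vectors. The sketch below runs in exponential time and is meant only as an illustration for tiny strings, not as a replacement for the recursion above.

```python
from itertools import combinations
from collections import defaultdict

def subsequence_features(v, r, lam):
    # phi_b(v) = sum over index vectors i with v[i] = b of lam**l(i),
    # where l(i) = i_r - i_1 + 1 is the span of the (ordered) index vector
    phi = defaultdict(float)
    for idx in combinations(range(len(v)), r):
        b = "".join(v[i] for i in idx)
        phi[b] += lam ** (idx[-1] - idx[0] + 1)
    return phi

def subsequence_kernel(u, v, r, lam):
    # k_r(u, v) = sum_b phi_b(u) * phi_b(v); exponential time, tiny strings only
    pu, pv = subsequence_features(u, r, lam), subsequence_features(v, r, lam)
    return sum(w * pv[b] for b, w in pu.items() if b in pv)

print(subsequence_kernel("gatta", "catta", r=3, lam=0.7))
```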
Remark 2.25 (Ridge Problem) The kernels (2.25) and (2.28) lead to the so-called ridge problem when applied to natural language documents, i.e., different documents $u \in \Sigma^*$ and $v \in \Sigma^*$ map to almost orthogonal features $\phi(u)$ and $\phi(v)$. Thus, the Gram matrix has a dominant diagonal (see Figure 2.5).
Figure 2.5 Intensity plots of the normalized Gram matrices when applying the string
kernels (2.24), (2.25) and (2.28) (from left to right) to 32 sentences taken from this chapter
with n = 5 and λ = 0.5. 11, 8, 4 and 9 sentences were taken from Section 2.2, Subsection
2.2.2, Section 2.3 and Subsection 2.3.1, respectively. For the sake of clarity, white lines are
inserted to indicate the change from one section to another section.
This dominant diagonal is problematic because each new test document $x$ is likely to have a kernel value $k(x, x_i)$ close to zero. In order to explain this we notice that a document $u \in \Sigma^*$ has at least $|u| - r + 1$ matches of contiguous substrings with itself, i.e., all substrings $u[i : (i + r - 1)]$ for all $i \in \{1, \ldots, |u| - r + 1\}$. However, even if two documents $u \in \Sigma^*$ and $v \in \Sigma^*$ share all words $b \in \Sigma^r$ of length $r$ (on average) but in different orders, we have approximately $|u|/r$ matches (assuming $|u| \approx |v|$). Therefore the difference $\left((|u| - r) - \frac{|u|}{r}\right) \lambda^{2r}$ between diagonal and off-diagonal elements of the Gram matrix remains large.
A major disadvantage of the two kernel families presented so far is that they are
limited to a fixed representation of objects, x, i.e., vectorial data or strings. In order
to overcome this limitation, Jaakkola and Haussler introduced the so-called Fisher
kernel. The idea of the Fisher kernel is to use a probabilistic model of the input
data, x, to derive a similarity measure between two data items. In order to achieve
this, let us assume that the object generating probability measure PX can be written
as a mixture, i.e., there exists a vector θ = (θ 1 ; . . . ; θ r ; π ) such that14
$$P_X(x) = P_X^{\boldsymbol{\theta}}(x) = \sum_{i=1}^{r} P_{X|M=i}^{\theta_i}(x) \cdot P_M(i) = \sum_{i=1}^{r} \pi_i \cdot P_{X|M=i}^{\theta_i}(x)\,, \qquad (2.31)$$
14 With a slight abuse of notation, we always use PX even if X is a continuous random variable possessing a
density fX . In this case we have to replace PX by fX and PX|M=i by fX|M=i but the argument would not change.
Definition 2.26 (Fisher score and Fisher information matrix) Given a parameterized family of probability measures $P_X^{\boldsymbol{\theta}}$ over the space $\mathcal{X}$ and a parameter vector $\tilde{\boldsymbol{\theta}}$, the function
$$\mathbf{f}_{\tilde{\boldsymbol{\theta}}}(x) \stackrel{\mathrm{def}}{=} \left. \frac{\partial \ln P_X^{\boldsymbol{\theta}}(x)}{\partial \boldsymbol{\theta}} \right|_{\boldsymbol{\theta} = \tilde{\boldsymbol{\theta}}}$$
is called the Fisher score of $x$ at $\tilde{\boldsymbol{\theta}}$, and the matrix
$$\mathbf{I}_{\tilde{\boldsymbol{\theta}}} \stackrel{\mathrm{def}}{=} \mathbf{E}_X\left[ \mathbf{f}_{\tilde{\boldsymbol{\theta}}}(X) \, \mathbf{f}_{\tilde{\boldsymbol{\theta}}}(X)^\top \right] \qquad (2.32)$$
is called Fisher information matrix at $\tilde{\boldsymbol{\theta}}$. Note that the expectation in equation (2.32) is w.r.t. $P_X^{\tilde{\boldsymbol{\theta}}}$.
As a consequence, these features allow a good separation of all regions of the input
space in which the mixture measure (2.31) is high for exactly one component
only. Hence, using the Fisher score fθ (x) as a vectorial representation of x provides
a principled way of obtaining kernels from a generative probabilistic model of the
data.
The function $k(x, \tilde{x}) = \mathbf{f}_{\boldsymbol{\theta}}(x)^\top \mathbf{I}_{\boldsymbol{\theta}}^{-1} \mathbf{f}_{\boldsymbol{\theta}}(\tilde{x})$ is called the Fisher kernel. The naive Fisher kernel is the simplified function
$$k(x, \tilde{x}) = \mathbf{f}_{\boldsymbol{\theta}}(x)^\top \mathbf{f}_{\boldsymbol{\theta}}(\tilde{x})\,.$$
This assumes that the Fisher information matrix $\mathbf{I}_{\boldsymbol{\theta}}$ is the identity matrix $\mathbf{I}$. The naive Fisher kernel is practically more relevant because the computation of the Fisher information matrix is very time consuming and sometimes not even analytically possible. Note, however, that we need not only a probability model $P_X^{\boldsymbol{\theta}}$ of the data but also the whole parameterized family of probability measures containing $P_X^{\boldsymbol{\theta}}$.
Example 2.28 (Fisher kernel) Let us assume that the measures $P_{X|M=i}$ belong to the exponential family, i.e., their density can be written as
$$f_{X|M=i}^{\theta_i}(x) = a_i(\theta_i) \cdot c_i(x) \cdot \exp\left(\theta_i^\top \boldsymbol{\tau}_i(x)\right)\,,$$
where $c_i : \mathcal{X} \to \mathbb{R}$ is a fixed function, $\boldsymbol{\tau}_i : \mathcal{X} \to \mathbb{R}^{n_i}$ is known as a sufficient statistic of $x$ and $a_i : \mathbb{R}^{n_i} \to \mathbb{R}$ is a normalization constant. Then the value of the features $\boldsymbol{\phi}_{\theta_j}$ associated with the $j$th parameter vector $\theta_j$ is given by
$$\frac{\partial \ln f_X^{\boldsymbol{\theta}}(x)}{\partial \theta_j} = \frac{1}{f_X^{\boldsymbol{\theta}}(x)} \cdot \frac{\partial \left[\sum_{i=1}^{r} P_M(i) \, a_i(\theta_i) \, c_i(x) \exp\left(\theta_i^\top \boldsymbol{\tau}_i(x)\right)\right]}{\partial \theta_j} = \frac{f_{X|M=j}^{\theta_j}(x) \, P_M(j)}{f_X^{\boldsymbol{\theta}}(x)} \left( \underbrace{\frac{\partial a_j(\theta_j)/\partial \theta_j}{a_j(\theta_j)}}_{\text{independent of } x} + \boldsymbol{\tau}_j(x) \right)\,,$$
15 If this relation does not hold then the features associated with π j already allow good discrimination.
that is, we effectively consider the sufficient statistic τ j (x) of the j th mixture
component measure as a vectorial representation of our data.
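The example can be made concrete with a one-dimensional Gaussian mixture: the Fisher score with respect to a component mean is the responsibility of that component times the sufficient-statistic term $(x - \mu_j)/\sigma^2$, and the naive Fisher kernel is the inner product of two such score vectors. The following sketch, with arbitrarily chosen illustrative parameters, implements exactly this.

```python
import numpy as np

def fisher_score(x, pi, mu, sigma):
    # Fisher score of x w.r.t. the component means mu_j of a 1-D Gaussian mixture
    # p(x) = sum_j pi_j N(x; mu_j, sigma^2):
    #   d ln p(x)/d mu_j = (pi_j N(x; mu_j, sigma^2) / p(x)) * (x - mu_j) / sigma^2
    dens = pi * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    resp = dens / dens.sum()                 # responsibilities of the components
    return resp * (x - mu) / sigma ** 2

def naive_fisher_kernel(x, x_tilde, pi, mu, sigma):
    # naive Fisher kernel: inner product of the two Fisher scores (I_theta = I)
    return float(np.dot(fisher_score(x, pi, mu, sigma),
                        fisher_score(x_tilde, pi, mu, sigma)))

pi, mu, sigma = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), 1.0
print(naive_fisher_kernel(1.5, 2.5, pi, mu, sigma))
```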
We have seen that kernels are a powerful tool that enrich the applicability of linear classifiers to a large extent. Nonetheless, apart from the solution of the perceptron learning algorithm it is not yet clear when this method can successfully be applied, i.e., for which learning algorithms $\mathcal{A} : \cup_{m=1}^{\infty} \mathcal{Z}^m \to \mathcal{F}$ the solution $\mathcal{A}(z)$ admits a representation as a kernel expansion $f(\cdot) = \sum_{i=1}^{m} \alpha_i k(x_i, \cdot)$. For two such functions $f = \sum_{i=1}^{r} \alpha_i k(x_i, \cdot)$ and $g = \sum_{j=1}^{s} \beta_j k(\tilde{x}_j, \cdot)$ we define the inner product
$$\langle f, g \rangle \stackrel{\mathrm{def}}{=} \sum_{i=1}^{r} \sum_{j=1}^{s} \alpha_i \beta_j k(x_i, \tilde{x}_j) = \sum_{j=1}^{s} \beta_j f(\tilde{x}_j) = \sum_{i=1}^{r} \alpha_i g(x_i)\,, \qquad (2.35)$$
where the last equality follows from the symmetry of the kernel $k$. Note that this inner product $\langle \cdot, \cdot \rangle$ is independent of the representation of the functions $f$ and $g$ because changing the representation of $f$, i.e., changing $r$, $\boldsymbol{\alpha}$ and $\{x_1, \ldots, x_r\}$, would not change $\sum_{j=1}^{s} \beta_j f(\tilde{x}_j)$ (similarly for $g$). Moreover, we see that
1. $\langle f, g \rangle = \langle g, f \rangle$ for all functions $f, g \in \mathcal{F}_0$,
2. $\langle c f + d g, h \rangle = c \langle f, h \rangle + d \langle g, h \rangle$ for all functions $f, g, h \in \mathcal{F}_0$ and all $c, d \in \mathbb{R}$,
3. $\langle f, f \rangle = \sum_{i=1}^{r} \sum_{j=1}^{r} \alpha_i \alpha_j k(x_i, x_j) \geq 0$ for all functions $f \in \mathcal{F}_0$, because $k$ is a Mercer kernel.
Furthermore, for all functions $f \in \mathcal{F}_0$ and all $x \in \mathcal{X}$ we have the reproducing property
$$\langle f, k(x, \cdot) \rangle = f(x)\,, \qquad (2.36)$$
and, by the Cauchy–Schwarz inequality,
$$0 \leq (f(x))^2 = \left(\langle f, k(x, \cdot) \rangle\right)^2 \leq \langle f, f \rangle \, \underbrace{\langle k(x, \cdot), k(x, \cdot) \rangle}_{k(x, x)}\,. \qquad (2.37)$$
The proof is given in Appendix B.3. It elucidates once more the advantage of
kernels: Apart from limiting the computational effort in application, they allow
for a quite general class of learning algorithms (characterized by the minimization
of a functional of the form (2.38)) to be applied in dual variables α ∈ Ê m .
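The properties above are easy to check numerically for a concrete kernel expansion. The following sketch, an illustration rather than part of the text, builds a function $f \in \mathcal{F}_0$ from an RBF kernel, computes $\langle f, f \rangle$ via the Gram matrix and verifies non-negativity as well as the bound (2.37), using the fact that $k(x, x) = 1$ for RBF kernels.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

# f(.) = sum_i alpha_i k(x_i, .), a function in F_0 represented by (alpha, X)
X = np.random.randn(6, 2)
alpha = np.random.randn(6)
f = lambda x: sum(a * rbf(xi, x) for a, xi in zip(alpha, X))

# <f, f> = sum_i sum_j alpha_i alpha_j k(x_i, x_j)   (third property above)
G = np.array([[rbf(xi, xj) for xj in X] for xi in X])
norm_sq = alpha @ G @ alpha

x = np.random.randn(2)
# Cauchy-Schwarz, eq. (2.37): f(x)^2 <= <f, f> k(x, x) with k(x, x) = 1 for RBF kernels
print(norm_sq >= 0, f(x) ** 2 <= norm_sq * rbf(x, x))
```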
The methods presented in the last two sections, namely the idea of regularization,
and the kernel technique, are elegantly combined in a learning algorithm known as
support vector learning (SV learning).16 In the study of SV learning the notion of
margins is of particular importance. We shall see that the support vector machine
(SVM) is an implementation of a more general regularization principle known
as the large margin principle. The greatest drawback of SVMs, that is, the need
for zero training error, is resolved by the introduction of soft margins. We will
demonstrate how both large margin and soft margin algorithms can be viewed in
the geometrical picture given in Figure 2.1 on page 23. Finally, we discuss several
extensions of the classical SVM algorithm achieved by reparameterization.
Figure 2.6 Geometrical margins of a plane (thick solid line) in $\mathbb{R}^2$. The crosses ($y_i = +1$) and dots ($y_i = -1$) represent labeled examples $x_i$. (Left) The classifier $f_w$ with the largest geometrical margin $\gamma_z(w)$. Note that this quantity is invariant under rescaling of the weight vector. (Right) A classifier $f_{\tilde{w}}$ with a smaller geometrical margin $\gamma_z(\tilde{w})$. Since $\min_{(x_i, y_i) \in z} y_i \langle x_i, \tilde{w} \rangle = 1$, $\tilde{w}$ can be used to measure the extent of the gray zone tube by $\gamma_z(\tilde{w}) = 1/\|\tilde{w}\|$.
given training sample z. This quantity is also known as the functional margin on
the training sample z and needs to be normalized to be useful for comparison
across different weight vectors $w$ not necessarily of unit length. More precisely, when normalizing the real-valued outputs by the norm of the weight vector $w$ (which is equivalent to considering the real-valued outputs of normalized weight vectors $w/\|w\|$ only) we obtain a confidence measure comparable across different hyperplanes. The following definition introduces the different notions of margins
more formally.
We define the
functional margin $\tilde{\gamma}_i(w)$ on an example $(x_i, y_i) \in z$ to be $\tilde{\gamma}_i(w) \stackrel{\mathrm{def}}{=} y_i \langle x_i, w \rangle$,
functional margin $\tilde{\gamma}_z(w)$ on a training sample $z$ to be $\tilde{\gamma}_z(w) \stackrel{\mathrm{def}}{=} \min_{(x_i, y_i) \in z} \tilde{\gamma}_i(w)$,
geometrical margin $\gamma_i(w)$ on an example $(x_i, y_i) \in z$ to be $\gamma_i(w) \stackrel{\mathrm{def}}{=} \tilde{\gamma}_i(w)/\|w\|$,
geometrical margin $\gamma_z(w)$ on a training sample $z$ to be $\gamma_z(w) \stackrel{\mathrm{def}}{=} \tilde{\gamma}_z(w)/\|w\|$.
Note that $\tilde{\gamma}_i(w) > 0$ implies correct classification of $(x_i, y_i) \in z$. Furthermore, for $w \in \mathcal{W}$ the functional and geometrical margins coincide.
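A small sketch may help to fix the definitions: for a toy sample and weight vector, the functional margins are the values $y_i \langle x_i, w \rangle$, the geometrical margins are obtained by dividing by $\|w\|$, and the sample margins are the respective minima.

```python
import numpy as np

def margins(w, X, y):
    # functional margins y_i <x_i, w> and geometrical margins y_i <x_i, w> / ||w||;
    # the margins on the whole sample are the minima over all examples
    func = y * (X @ w)
    geom = func / np.linalg.norm(w)
    return func.min(), geom.min()

X = np.array([[2.0, 1.0], [-1.0, -2.0], [1.5, 2.5]])
y = np.array([+1, -1, +1])
w = np.array([1.0, 1.0])
print(margins(w, X, y))  # functional and geometrical margin on the sample
```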
In 1962 Novikoff proved a theorem for perceptrons which was, in 1964, extended to
linear classifiers in kernel space. The theorem shows that the number of corrections
in the perceptron learning algorithm is provably decreasing for training samples
which admit a large margin.
The proof is given in Appendix B.4. This theorem answers one of the questions
associated with perceptron learning, that is, the number of steps until convergence.
The theorem was one of the first theoretical justifications of the idea that large
margins yield better classifiers; here in terms of mistakes during learning. We shall
see in Part II that large margins indeed yield better classifiers in terms of expected
risk.
Let $\mathcal{F}$ and $\mathcal{K}$ be the RKHS and feature space connected with the Mercer kernel $k$, respectively. The classifier $w$ with the largest margin $\gamma_z(w)$ on a given training sample can be written as
$$w_{\mathrm{SVM}} \stackrel{\mathrm{def}}{=} \underset{w \in \mathcal{W}}{\operatorname{argmax}} \; \gamma_z(w) = \underset{w \in \mathcal{K}}{\operatorname{argmax}} \; \frac{1}{\|w\|} \tilde{\gamma}_z(w)\,. \qquad (2.40)$$
Two methods of casting the problem of finding this classifier into a regularization
framework are conceivable. One method is to refine the (coarse) l0−1 loss function
given in equation (2.9) by exploiting the minimum real-valued output γ z (w) of
each classifier $w \in \mathcal{W}$. A second option is to fix the minimum real-valued output $\tilde{\gamma}_z(w)$ of the classifier $w \in \mathcal{K}$ and to use the norm $\|w\|$ of each classifier to measure
its complexity. Though the latter is better known in the SV community we shall
present both formulations.
1. Fix the norm of the classifiers to unity (as done in Novikoff's theorem); then we must maximize the geometrical margin. In terms of equation (2.38), the hypothesis space is the unit hypersphere in feature space.
2. Fix the functional margin to unity and use the norm $\|w\|$ of the classifier as the quantity to be minimized. In this case the hypothesis space is
$$\mathcal{W}(z) \stackrel{\mathrm{def}}{=} \{ w \in \mathcal{K} \mid \tilde{\gamma}_z(w) = 1 \}\,, \qquad (2.43)$$
which are known as canonical hyperplanes. Clearly, this definition of the hypothe-
sis space is data dependent which makes a theoretical analysis quite intricate17 . The
advantage of this formulation becomes apparent if we consider the corresponding
risk functional. Here, $\mathbf{G}$ is the $m \times m$ Gram matrix defined by equation (2.18) and $\mathbf{Y} \stackrel{\mathrm{def}}{=} \mathrm{diag}(y_1, \ldots, y_m)$. Note, however, that the solution
$$w_{\mathrm{SVM}} = \sum_{i=1}^{m} \hat{\alpha}_i y_i x_i$$
achieves zero training error, and insisting on zero training error in a high dimensional feature space may lead to overfitting, that is, a large discrepancy between empirical risk
risk (which was previously zero) and true risk of a classifier. Moreover, the above
algorithm is “nonrobust” in the sense that one outlier (a training point (xi , yi ) ∈ z
whose removal would lead to a large increase in margin) can cause the learning
algorithm to converge very slowly or, even worse, make it impossible to apply at all (if $\gamma_i(w) < 0$ for all $w \in \mathcal{W}$).
In order to overcome this insufficiency we introduce a heuristic which has
become known as the soft margin SVM. The idea exploited is to upper bound the
zero-one loss l0−1 as given in equation (2.9) by a linear or quadratic function (see
Figure 2.7),
$$l_{0-1}(f(x), y) = \mathbf{I}_{-y f(x) > 0} \leq \max\{1 - y f(x), 0\} = l_{\mathrm{lin}}(f(x), y)\,, \qquad (2.47)$$
$$l_{0-1}(f(x), y) = \mathbf{I}_{-y f(x) > 0} \leq \left(\max\{1 - y f(x), 0\}\right)^2 = l_{\mathrm{quad}}(f(x), y)\,.$$
It is worth mentioning that, due to the cut off at a real-valued output of one (on the
correct side of the decision surface), the norm f can still serve as a regularizer.
Viewed this way, the idea is in the spirit of the second parameterization of the
optimization problem of large margins (see equation (2.40)).
Linear Approximation
or equivalently
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^{m} \xi_i + \lambda m \|w\|^2 \\ \text{subject to} & y_i \langle x_i, w \rangle \geq 1 - \xi_i\,, \quad i = 1, \ldots, m\,, \\ & \boldsymbol{\xi} \geq \mathbf{0}\,. \end{array} \qquad (2.48)$$
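For illustration, the box-constrained dual of this problem (an expression of the form (2.46) maximized over $0 \le \boldsymbol{\alpha} \le \frac{1}{2\lambda m}\mathbf{1}$, as derived below) can be handled by a simple projected gradient ascent. The following sketch is a minimal illustration of that idea, not the optimization method of Appendix B.5; note that, as in the text, the classifiers contain no bias term, so no equality constraint appears.

```python
import numpy as np

def soft_margin_svm_dual(K, y, lam, eta=0.01, steps=2000):
    # Projected gradient ascent on W(alpha) = alpha'1 - 1/2 alpha' Y K Y alpha
    # subject to the box constraint 0 <= alpha <= 1/(2*lam*m)  (linear soft margin).
    m = len(y)
    H = np.diag(y.astype(float)) @ K @ np.diag(y.astype(float))
    alpha, upper = np.zeros(m), 1.0 / (2.0 * lam * m)
    for _ in range(steps):
        grad = np.ones(m) - H @ alpha          # gradient of the dual objective
        alpha = np.clip(alpha + eta * grad, 0.0, upper)
    return alpha  # decision function: f(x) = sum_i alpha_i y_i k(x_i, x)

# toy example with a linear kernel
X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([+1, +1, -1, -1])
print(soft_margin_svm_dual(X @ X.T, y, lam=0.1))
```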
Transforming this into an optimization problem involving the corresponding Wolfe
dual we must maximize an equation of the form (2.46), but this time in the “box”
$0 \leq \boldsymbol{\alpha} \leq \frac{1}{2\lambda m}\mathbf{1}$ (see Section B.5). In the limit $\lambda \to 0$ we obtain the "hard margin" SVM because there is no upper bound on $\boldsymbol{\alpha}$. Another explanation of
this equivalence is given by the fact that the objective function is proportional to $\frac{1}{\lambda m} \sum_{i=1}^{m} \xi_i + \|w\|^2$. Thus, in the limit of $\lambda \to 0$, any $w$ for which $\boldsymbol{\xi} \neq \mathbf{0}$ incurs an infinitely large value of the objective function and therefore, in the optimum, $\sum_{i=1}^{m} \xi_i = 0$. Note that by virtue of this formulation the "box" is decreased with increasing training sample size.

Figure 2.7 Approximation to the Heaviside step function $\mathbf{I}_{y f(x) \leq 0}$ (solid line) by the so-called "hinge loss" (dashed line) and a quadratic margin loss (dotted line). The $x$-axis shows the negative real-valued output $-y f(x)$, which is positive in the case of misclassification of $x$ by $f$.
Quadratic Approximation
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^{m} \xi_i^2 + \lambda m \|w\|^2 \\ \text{subject to} & y_i \langle x_i, w \rangle \geq 1 - \xi_i\,, \quad i = 1, \ldots, m\,, \\ & \boldsymbol{\xi} \geq \mathbf{0}\,. \end{array} \qquad (2.49)$$
The corresponding Wolfe dual (derived in Section B.5) is given by
$$W(\boldsymbol{\alpha}) = \boldsymbol{\alpha}^\top \mathbf{1} - \frac{1}{2} \boldsymbol{\alpha}^\top \mathbf{Y} \mathbf{G} \mathbf{Y} \boldsymbol{\alpha} - \frac{\lambda m}{2} \boldsymbol{\alpha}^\top \boldsymbol{\alpha}\,,$$
and must be maximized in the positive quadrant $0 \leq \boldsymbol{\alpha}$. This can equivalently be expressed by a change of the Gram matrix, i.e.,
$$W(\boldsymbol{\alpha}) = \boldsymbol{\alpha}^\top \mathbf{1} - \frac{1}{2} \boldsymbol{\alpha}^\top \mathbf{Y} \widetilde{\mathbf{G}} \mathbf{Y} \boldsymbol{\alpha}\,, \qquad \widetilde{\mathbf{G}} = \mathbf{G} + \lambda m \mathbf{I}\,. \qquad (2.50)$$
Remark 2.32 (Data independent hypothesis spaces) The two algorithms pre-
sented in this subsection use the idea of fixing the functional margin to unity.
This allows the geometrical margin to be controlled by the norm w of the weight
vector w. As we have seen in the previous subsection there also exists a “data inde-
pendent” formulation. In the case of a quadratic soft margin loss the formulation
is apparent from the change of the Gram matrix: The quadratic soft margin SVM
is equivalent to a hard margin SVM if we change the Gram matrix G to G + λmI.
Furthermore, in the hard margin case, we could alternatively have the hypothesis
space being the unit hypersphere in feature space. As a consequence thereof, all
we need to consider is the change in the feature space, if we penalize the diagonal
of the Gram matrix by λm.
Remark 2.33 (Cost matrices) In Example 2.7 we showed how different a-priori
class probabilities PY (−1) and PY (+1) can be incorporated through the use of a
cost matrix loss function. In the case of soft margin loss this can be approximately
achieved by using different values λ+ ∈ Ê + and λ− ∈ Ê + at the constraints for
the training points of class +1 and −1, respectively. As the (general) regularizer
is inversely related to the allowed violation of constraints it follows that the
underrepresented class having smaller prior probability should have the larger
λ value.
In the previous two subsections the SV learning algorithms were introduced purely
from a margin maximization perspective. In order to associate these algorithms
with the geometrical picture given in Figure 2.1 on page 23 we note that, for a
Figure 2.8 Finding the center of the largest inscribable ball in version space. (Left) In this example four training points were given which incur the depicted four planes. Let us assume that the labeling of the training sample was such that the polyhedron on top of the sphere is the version space. Then, the SV learning algorithm finds the (weight) vector $w$ on top of the sphere as the center of the largest inscribable ball $\tau(w)$ (transparent cap). Here, we assumed $\|y_i x_i\| = \|x_i\|$ to be constant. The distance of $w$ from the hyperplanes (dark line) is proportional to the margin $\gamma_z(w)$ (see text). (Right) Viewed from the top we see that the version space $V(z)$ is a bent convex body into which we can fully inscribe a circle of radius proportional to $\gamma_z(w)$.
fixed point $(x_i, y_i) \in z$, the geometrical margin $\gamma_i(\tilde{w})$ can be read as the distance of the linear classifier having normal $\tilde{w}$ to the hyperplane $\{ w \in \mathcal{K} \mid y_i \langle x_i, w \rangle = 0 \}$. In fact, the Euclidean distance of the point $\tilde{w}$ from the hyperplane having normal $y_i x_i$ is $y_i \langle x_i, \tilde{w} \rangle / \|y_i x_i\| = \gamma_i(\tilde{w}) / \|x_i\|$. For the moment let us assume that $\|x_i\|$ is constant for all training objects $x_i$ in $\mathbf{x} \in \mathcal{X}^m$. Then, if a classifier $f_{\tilde{w}}$ achieves a margin of $\gamma_z(\tilde{w})$ on the training sample $z$ we know that the ball
$$\tau(\tilde{w}) = \left\{ w \in \mathcal{W} \;\middle|\; \|w - \tilde{w}\| < \frac{\gamma_z(\tilde{w})}{\|x_i\|} \right\} \subset V(z)$$
of radius $\tau = \gamma_z(\tilde{w}) / \|x_i\|$ is totally inscribable in version space $V(z)$. Hence, maximizing $\gamma_z(\tilde{w})$ is equivalent to finding the center of the largest inscribable ball in version space (see Figure 2.8).
The situation changes if we drop the assumption that xi is constant. In this
case, training objects for which xi is very large effectively minimize the radius
τ of the largest inscribable ball. If we consider the center of the largest inscribable
ball as an approximation to the center of mass of version space V (z) (see also
Section 3.4) we see that normalizing the xi ’s to unit length is crucial to finding a
good approximation for this point.
The geometrical intuition still holds if we consider the quadratic approximation presented in Subsection 2.4.2. The effect of the diagonal penalization is to add a new basis axis for each training point $(x_i, y_i) \in z$. Hence, in this new space the quadratic SVM tries to find the center of the largest inscribable ball. Needless to say, we again assume the $x_i$'s to be of constant length $\|x_i\|$. We shall see in Section 5.1 that the margin $\gamma_z(\tilde{w})$ is too coarse a measure to be used for bounds on the expected risk if $\|x_i\| \neq \mathrm{const.}$—especially if we apply the kernel technique.
The SV algorithms presented so far constitute the basis of the standard SV tool box. There exist, however, several (heuristic) extensions for the case of multiple classes ($2 < |\mathcal{Y}| < \infty$), regression estimation ($\mathcal{Y} = \mathbb{R}$) and reparameterizations in terms of the assumed noise level $\mathbf{E}_X\left[1 - \max_{y \in \mathcal{Y}} P_{Y|X=x}(y)\right]$, which we present here.
Clearly, this method learns one classifier for each of the $K$ classes against all the other classes and is hence known as the one-versus-rest (o-v-r) method.
Using a probabilistic model for the frequencies n, different prior probabilities of the
classes y ∈ can be incorporated, resulting in better generalization ability. Instead
of solving K (K − 1) /2 separate optimization problems, it is again possible to
combine them in a single optimization problem. If the prior probabilities $P_Y(j)$ for the $K$ classes are roughly $1/K$, the method scales as $\mathcal{O}(m^2)$ and is thus independent of the number of classes.
Recently, a different method for combining the single pairwise decisions has been
suggested. By specifying a directed acyclic graph (DAG) of consecutive pairwise
classifications, it is possible to introduce a class hierarchy. The leaves of such a
DAG contain the final decisions which are obtained by exclusion rather than by
voting. This method compares favorably with the o-v-o and o-v-r methods.
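The o-v-r scheme itself is easy to state in code: train one real-valued classifier per class on relabeled data and decide for the class with the largest output. The sketch below is an illustration that uses a regularized least squares fit as a stand-in binary learner; any of the SV algorithms of this chapter could be substituted.

```python
import numpy as np

def least_squares_classifier(X, y):
    # stand-in binary learner: regularized least squares on the +/-1 targets
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
    return lambda x: float(x @ w)

def one_vs_rest(X, y, classes, train_binary=least_squares_classifier):
    # one real-valued classifier per class: class k (label +1) against the rest (label -1)
    fs = {k: train_binary(X, np.where(y == k, 1.0, -1.0)) for k in classes}
    # decide for the class whose classifier outputs the largest real value
    return lambda x: max(fs, key=lambda k: fs[k](x))

X = np.array([[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 1.1], [-1.0, -1.0], [-0.9, -1.1]])
y = np.array([0, 0, 1, 1, 2, 2])
predict = one_vs_rest(X, y, classes=[0, 1, 2])
print(predict(np.array([0.0, 0.9])))  # likely class 1
```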
In the regression estimation problem we are given a sample of $m$ real target values $\mathbf{t} = (t_1, \ldots, t_m) \in \mathbb{R}^m$, rather than $m$ classes $\mathbf{y} = (y_1, \ldots, y_m) \in \mathcal{Y}^m$. In order to
extend the SV learning algorithm to this task, we note that an “inversion” of the
linear loss llin suffices in order to use the SV machinery for real-valued outputs ti .
In classification the linear loss $l_{\mathrm{lin}}(f(x), \cdot)$ adds to the total cost if the real-valued output $y f(x)$ is smaller than 1. For regression estimation it is desirable to have the opposite true, i.e., incurred costs result if $|t - f(x)|$ is very large instead of
small. This requirement is formally captured by the ε–insensitive loss
$$l_\varepsilon(f(x), t) = \begin{cases} 0 & \text{if } |t - f(x)| \leq \varepsilon \\ |t - f(x)| - \varepsilon & \text{if } |t - f(x)| > \varepsilon \end{cases}\,. \qquad (2.51)$$
Then, one obtains a quadratic programming problem similar to (2.46), this time in
2m dual variables αi and α̃i —two corresponding to each training point constraint.
This is simply due to the fact that f can fail to attain a deviation less than ε on both
sides of the given real-valued output ti , i.e., ti − ε and ti + ε. An appealing feature
of this loss is that it leads to sparse solutions, i.e., only a few of the αi (or α̃i ) are
non-zero. For further references that cover the regression estimation problem the
interested reader is referred to Section 2.6.
A major drawback of the soft margin SV learning algorithm given in the form
(2.48) is the lack of control over how many training points will be considered
as margin errors or “outliers”, that is, how many have γ̃i (wSVM ) < 1. This is
essentially due to the fact that we fixed the functional margin to one. By a simple
reparameterization it is possible to make the functional margin itself a variable
of the optimization problem. One can show that the solution of the following
optimization problem has the property that the new parameter ν bounds the fraction
of margin errors $\frac{1}{m} \left| \{ (x_i, y_i) \in z \mid \tilde{\gamma}_i(w_{\mathrm{SVM}}) < \rho \} \right|$ from above:
$$\begin{array}{ll} \text{minimize} & \frac{1}{m} \sum_{i=1}^{m} \xi_i - \nu \rho + \frac{1}{2} \|w\|^2 \\ \text{subject to} & y_i \langle x_i, w \rangle \geq \rho - \xi_i\,, \quad i = 1, \ldots, m\,, \\ & \boldsymbol{\xi} \geq \mathbf{0}\,, \quad \rho \geq 0\,. \end{array} \qquad (2.52)$$
It can be shown that, for each value of ν ∈ [0, 1], there exists a value of λ ∈ Ê+
such that the solution wν and wλ found by solving (2.52) and (2.48) have the same
geometrical margins γ z (wν ) = γ z (wλ ). Thus we could try different values of λ
in the standard linear soft margin SVM to obtain a required fraction of margin
errors. The appealing property of the problem (2.52) is that this adjustment is done
within the one optimization problem (see Section B.5). Another property which
can be proved is that, for all probability models where neither $P_{X|Y=+1}$ nor $P_{X|Y=-1}$ contains any discrete component, $\nu$ asymptotically equals the fraction of margin errors. Hence, we can incorporate prior knowledge of the noise level $\mathbf{E}_X\left[1 - \max_{y \in \mathcal{Y}} P_{Y|X=x}(y)\right]$ via $\nu$. Excluding all training points for which the real-valued output is less than $\rho$ in absolute value, the geometrical margin of the solution on the remaining training points is $\rho / \|w\|$.
Note that this quantity does not bound the expected risk of the one classifier
learned from a training sample z but the average expected risk performance of
the algorithm $\mathcal{A}$. For any training sample $z$, an almost unbiased estimator of this quantity is given by the leave-one-out error $R_{\mathrm{loo}}[\mathcal{A}, z]$ of $\mathcal{A}$, which is defined by
$$R_{\mathrm{loo}}[\mathcal{A}, z] \stackrel{\mathrm{def}}{=} \frac{1}{m} \sum_{i=1}^{m} l\left( \left( \mathcal{A}\left( (z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_m) \right) \right)(x_i), y_i \right)\,.$$
This measure counts the fraction of examples that are misclassified if we leave them
out for learning when using the algorithm . The unbiasedness of the estimator is
made more precise in the following proposition.
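The definition translates directly into code: retrain the algorithm $m$ times, each time leaving out one example, and average the losses on the held-out examples. The sketch below is generic in the learning algorithm; a one-nearest-neighbour learner is used only to make it runnable.

```python
import numpy as np

def leave_one_out_error(train, X, y, loss=lambda yhat, yi: float(yhat != yi)):
    # R_loo[A, z] = 1/m sum_i loss( A(z without z_i)(x_i), y_i )
    m, errors = len(y), 0.0
    for i in range(m):
        keep = np.arange(m) != i
        h = train(X[keep], y[keep])      # learn on the sample without (x_i, y_i)
        errors += loss(h(X[i]), y[i])    # test on the held-out example
    return errors / m

def nearest_neighbour(X, y):
    return lambda x: y[np.argmin(np.sum((X - x) ** 2, axis=1))]

X = np.array([[0.0], [0.2], [1.0], [1.2]])
y = np.array([-1, -1, +1, +1])
print(leave_one_out_error(nearest_neighbour, X, y))  # 0.0 on this toy sample
```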
$$W(\boldsymbol{\alpha}) = -\frac{1}{2} \boldsymbol{\alpha}^\top \mathbf{Y} \mathbf{G} \mathbf{Y} \boldsymbol{\alpha} + \sum_{i=1}^{m} J(\alpha_i)\,, \qquad (2.53)$$
We can give the following bound on the leave-one-out error $R_{\mathrm{loo}}[\mathcal{A}_W, z]$.
The proof is given in Appendix B.6. For support vector machines V. Vapnik has
shown that the leave-one-out error is bounded by the ratio of the number of non-
zero coefficients α̂i to the number m of training examples. The bound given in
Theorem 2.37 is slightly tighter than Vapnik’s leave-one-out bound. This is easy to
see because all training points that have α̂i = 0 cannot be leave-one-out errors in
either bound. Vapnik’s bound assumes all support vectors (all training points with
α̂i > 0) are
leave-one-out errors, whereas they only contribute as errors in equation
(2.55) if yi mj =1 α̂ j y j k xi , x j ≤ 0. In practice this means that the bound (2.55)
j =i
is tighter for less sparse solutions.
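Assuming the right-hand side of the bound (2.55) is the fraction of training points with $y_i \sum_{j \neq i} \hat{\alpha}_j y_j k(x_i, x_j) \leq 0$, as described above, it can be evaluated directly from the coefficient vector and the Gram matrix; the following sketch is only an illustration of that quantity.

```python
import numpy as np

def loo_bound(alpha, y, K):
    # fraction of training points with y_i * sum_{j != i} alpha_j y_j k(x_i, x_j) <= 0,
    # i.e., the points that may count as leave-one-out errors in the discussion above
    s = K @ (alpha * y)                        # sum_j alpha_j y_j k(x_i, x_j)
    s_without_i = s - np.diag(K) * alpha * y   # remove the j = i term
    return float(np.mean(y * s_without_i <= 0))

# usage: alpha from an SV solver, K the Gram matrix, y the +/-1 labels
```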
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^{m} \xi_i \\ \text{subject to} & y_i \sum_{j=1, j \neq i}^{m} \alpha_j y_j k(x_i, x_j) \geq 1 - \xi_i\,, \quad i = 1, \ldots, m\,, \\ & \boldsymbol{\alpha} \geq \mathbf{0}\,, \quad \boldsymbol{\xi} \geq \mathbf{0}\,. \end{array} \qquad (2.56)$$
For further classification of new test objects we use the decision rule given in
equation (2.54). Let us study the resulting method which we call a leave-one-out
machine (LOOM).
First, the technique appears to have no free regularization parameter. This
should be compared with support vector machines, which control the amount of
regularization through the free parameter λ. For SVMs, in the case of λ → 0
one obtains a hard margin classifier with no training errors. In the case of linearly
inseparable datasets in feature space (through noise, outliers or class overlap) one
must admit some training errors (by constructing soft margins). To find the best
choice of training error/margin tradeoff one must choose the appropriate value
of λ. In leave-one-out machines a soft margin is automatically constructed. This
happens because the algorithm does not attempt to minimize the number of training
errors—it minimizes the number of training points that are classified incorrectly
even when they are removed from the linear combination which forms the decision
rule. However, if one can classify a training point correctly when it is removed
from the linear combination, then it will always be classified correctly when it is
placed back into the rule. This can be seen as αi yi k (xi , xi ) always has the same
sign as yi ; any training point is pushed further from the decision boundary by its
own component of the linear combination. Note also that summing for all j = i
in the constraint (2.56) is equivalent to setting the diagonal of the Gram matrix
G to zero and instead summing for all j . Thus, the regularization employed by
leave-one-out machines disregards the values k (xi , xi ) for all i.
Second, as for support vector machines, the solutions α̂ ∈ Ê m can be sparse
in terms of the expansion vector; that is, only some of the coefficients α̂i are non-
zero. As the coefficient of a training point does not contribute to its leave-one-out
error in constraint (2.56), the algorithm does not assign a non-zero value to the
coefficient of a training point in order to correctly classify it. A training point has
to be classified correctly by the training points of the same label that are close to it,
but the point itself makes no contribution to its own classification in training.
The core idea of the presented algorithm is to directly minimize the leave-one-out
bound. Thus, it seems that we are able to control the generalization ability of an
algorithm disregarding quantities like the margin. This is not true in general18 and
18 Part II, Section 4.3, shows that there are models of learning which allow an algorithm to directly minimize a
bound on its generalization error. This should not be confused with the possibility of controlling the generalization
error of the algorithm itself.
in particular the presented algorithm is not able to achieve this goal. There are some
pitfalls associated with minimizing a leave-one-out bound:
1. In order to get a bound on the leave-one-out error we must specify the algorithm
beforehand. This is often done by specifying the form of the objective function
which is to be maximized (or minimized) during learning. In our particular case
we see that Theorem 2.37 only considers algorithms defined by the maximization
of W (α) with the “box” constraint 0 ≤ α ≤ u. By changing the learning algorithm
to minimize the bound itself we may well develop an optimization algorithm
which is no longer compatible with the assumptions of the theorem. This is true in
particular for leave-one-out machines which are no longer in the class of algorithms
considered by Theorem 2.37—whose bound they are aimed at minimizing. Further,
instead of minimizing the bound directly we are using the hinge loss as an upper
bound on the Heaviside step function.
2. The leave-one-out bound does not provide any guarantee about the generaliza-
tion error R [, z] (see Definition 2.10). Nonetheless, if the leave-one-out error is
small then we know that, for most training samples z ∈ m , the resulting classi-
fier has to have an expected risk close to that given by the bound. This is due to
Hoeffding’s bound which says that for bounded loss (the expected risk of a hypoth-
esis f is bounded to the interval [0, 1]) the expected risk R [ (z)] of the learned
classifier (z) is close to the expectation of the expected risk (bounded by the
leave-one-out bound) with high probability over the random choice of the training
sample.19 Note, however, that the leave-one-out estimate does not provide any in-
formation about the variance of the expected risk. Such information would allow
the application of tighter bounds, for example, Chebyshev’s bound.
3. The original motivation behind the use of the leave-one-out error was to measure the goodness of the hypothesis space $\mathcal{H}$ and of the learning algorithm $\mathcal{A}$ for the learning problem given by the unknown probability measure $P_Z$. Commonly, the leave-one-out error is used to select among different models $\mathcal{H}_1, \mathcal{H}_2, \ldots$ for a given learning algorithm $\mathcal{A}$. In this sense, minimizing the leave-one-out error is more a model selection strategy than a learning paradigm within a fixed model: among a set of candidate models we aim to select the one possessing the smallest expected risk.
A typical model selection task which arises in the case of kernel classifiers is the
selection of parameters of the kernel function used, for example, choosing the
optimal value of σ for RBF kernels (see Table 2.1).
Now, it is easy to see that a training point (xi , yi ) ∈ z is linearly penalized for
failing to obtain a functional margin of γ̃i (w) ≥ 1 + αi k(xi , xi ). In other words,
the larger the contribution the training point makes to the decision rule (the larger
the value of αi ), the larger its functional margin must be. Thus, the algorithm
controls the margin for each training point adaptively. From this formulation one
can generalize the algorithm to control regularization through the margin loss.
To make the margin at each training point a controlling variable we propose the
following learning algorithm:
$$\begin{array}{lll} \text{minimize} & \sum_{i=1}^{m} \xi_i & (2.57) \\ \text{subject to} & y_i \sum_{j=1}^{m} \alpha_j y_j k(x_i, x_j) \geq 1 - \xi_i + \lambda \alpha_i k(x_i, x_i)\,, \quad i = 1, \ldots, m\,, & \\ & \boldsymbol{\alpha} \geq \mathbf{0}\,, \quad \boldsymbol{\xi} \geq \mathbf{0}\,. & (2.58) \end{array}$$
Remark 2.39 (Clustering in feature space) In adaptive margin machines the ob-
jects xr ∈ x, which are representatives of clusters (centers) in feature space , i.e.,
those which have large kernel values w.r.t. objects from its class and small kernel
values w.r.t. objects from the other class, will have non-zero αr . In order to see this
we consider two objects, xr ∈ x and xs ∈ x, of the same class. Let us assume that
xr with ξr > 0 is the center of a cluster (w.r.t. the metric in feature space induced
by the kernel $k$) and $x_s$ with $\xi_s > 0$ lies at the boundary of the cluster. Hence we subdivide the set of all objects into
$x_i \in C^+$: $\xi_i = 0$, $y_i = y_r$, $i \neq r$, $i \neq s$,
$x_i \in C^-$: $\xi_i = 0$, $y_i \neq y_r$,
$x_i \in I^+$: $\xi_i > 0$, $y_i = y_r$, $i \neq r$, $i \neq s$,
$x_i \in I^-$: $\xi_i > 0$, $y_i \neq y_r$,
where we assume that k (xr , xr ) = k (xs , xs ). Since the cluster centers in feature
space minimize the intra-class distance whilst maximizing the inter-class dis-
tances, it becomes apparent that their $\alpha_r$ will be higher. Taking into account that the maximal $\alpha$ considerable for this analysis decreases as $\lambda$ increases, we see that, for suitably small $\lambda$, adaptive margin machines tend to only associate cluster centers in feature space with non-zero $\alpha$'s.
Linear functions have been investigated for several hundred years and it is virtually impossible to identify their first appearance in the scientific literature. In the field of artificial intelligence, however, the first studies of linear classifiers go back to the early works of Rosenblatt (1958), Rosenblatt (1962) and Minsky and Papert (1969). These works also contain the first account of the perceptron learning
algorithm which was originally developed without any notion of kernels. The more
general ERM principle underpinning perceptron learning was first formulated in
Vapnik and Chervonenkis (1974). In this book we introduce perceptron learning
using the notion of version space. This somewhat misleading name comes from
Mitchell (1977), Mitchell (1982), Mitchell (1997) and refers to the fact that all
classifiers h ∈ V (z) are different “versions” of consistent classifiers. Originally,
T. Mitchell considered the hypothesis space of logic formulas only.
The method of regularization introduced in Section 2.2 was originally devel-
oped in Tikhonov and Arsenin (1977) and introduced into the machine learning
framework in Vapnik (1982). The adaptation of ill-posed problems to machine
learning can be found in Vapnik (1982) where they are termed stochastic ill-posed
problems. In a nutshell, the difference to classical ill-posed problems is that the
solution y is a random variable of which we can only observe one specific sam-
ple. As a means to solving these stochastic ill-posed problems, Vapnik suggested
structural risk minimization.
The original paper which proved Mercer’s theorem is by Mercer (1909); the
version presented in this book can be found in König (1986). Regarding Remark
2.19, the work by Wahba (1990) gives an excellent overview of covariance func-
tions of Gaussian processes and kernel functions (see also Wahba (1999)). The
detailed derivation of the feature space for polynomial kernels was first published
in Poggio (1975). In the subsection on string kernels we mentioned the possibility
of using kernels in the field of Bioinformatics; first approaches can be found in
Jaakkola and Haussler (1999b) and Karchin (2000). For a more detailed treatment
of machine learning approaches in the field of Bioinformatics see Baldi and Brunak
(1998). The notion of string kernels was independently introduced and developed
by T. Jaakkola, C. Watkins and D. Haussler in Watkins (1998), Watkins (2000)
and Haussler (1999). A detailed study of support vector machines using these ker-
nels can be found in Joachims (1998) and Lodhi et al. (2001). For more traditional
methods in information retrieval see Salton (1968). The Fisher kernel was origi-
nally introduced in Jaakkola and Haussler (1999a) and later applied to the problem
of detecting remote protein homologizes (Jaakkola et al. 1999). The motivation of
Fisher kernels in these works is much different to the one given in this book and
relies on the notion of Riemannian manifolds of probability measures.
The consideration of RKHS introduced in Subsection 2.3.3 presents another
interesting aspect of kernels, that is, that they can be viewed as regularization
operators in function approximation. By noticing that kernels are the Green’s
functions of the corresponding regularization operator we can directly go from
kernels to regularization operators and vice versa (see Smola and Schölkopf (1998),
Smola et al. (1998), Smola (1998) and Girosi (1998) for details). The original proof
of the representer theorem can be found in Schölkopf et al. (2001). A simpler
version of this theorem was already proven in Kimeldorf and Wahba (1970) and
Kivinen et al. (1997).
In Section 2.4 we introduced the support vector algorithm as a combination of
structural risk minimization techniques with the kernel trick. The first appearance
of this algorithm—which has its roots in the early 1960s (Vapnik and Lerner
1963)—is in Boser et al. (1992). The notion of functional and geometrical margins
is due to Cristianini and Shawe-Taylor (1999). For recent developments in kernel
methods and large margin classifiers the interested reader is referred to Schölkopf
et al. (1998) and Smola et al. (2000). The original perceptron convergence theorem
(without using kernels) is due to Novikoff (1962) and was independently proved
by Block (1962). The extension to general kernels was presented in Aizerman et al.
(1964).
In the derivation of the support vector algorithm we used the notion of canon-
ical hyperplanes which is due to Vapnik (1995); for more detailed derivations of
the algorithm see also Vapnik (1998), Burges (1998) and Osuna et al. (1997). It has also been shown that support vector machines can be applied to the problem
of density estimation (Weston et al. 1999; Vapnik and Mukherjee 2000). The
reparameterization of the support vector algorithm in terms of ν, the fraction of
margin errors, was first published in Schölkopf et al. (2000) where it was also
applied to the support vector algorithm for regression estimation.
Finally, in Section 2.5, we introduce the leave-one-out error of algorithms
which motivate an algorithm called adaptive margin machines (Weston and Her-
brich 2000). The proof of the unbiasedness of the leave-one-out error can be found
in Lunts and Brailovsky (1969) and also in Vapnik (1998, p. 417). The bound on
the leave-one-out error for kernel classifiers presented in Theorem 2.37 was proven
in Jaakkola and Haussler (1999b).
3 Kernel Classifiers from a Bayesian Perspective
In the last chapter we saw that a learning problem is given by the identification of an unknown relationship $h \in \mathcal{Y}^{\mathcal{X}}$ between objects $x \in \mathcal{X}$ and classes $y \in \mathcal{Y}$ solely on the basis of a given iid sample $z = (\mathbf{x}, \mathbf{y}) = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathcal{X} \times \mathcal{Y})^m = \mathcal{Z}^m$ (see Definition 2.1). Any approach that deals with this problem
starts by choosing a hypothesis space¹ $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ and a loss function $l : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ appropriate for the task at hand. Then a learning algorithm $\mathcal{A} : \cup_{m=1}^{\infty} \mathcal{Z}^m \to \mathcal{H}$ aims to find the one particular hypothesis $h^* \in \mathcal{H}$ which minimizes a pre-defined risk determined on the basis of the loss function only, e.g., the expected risk $R[h]$ of the hypothesis $h$ or the empirical risk $R_{\mathrm{emp}}[h, z]$ of $h \in \mathcal{H}$ on the given training sample $z \in \mathcal{Z}^m$ (see Definitions 2.5 and 2.11). Once we have learned a classifier $\mathcal{A}(z) \in \mathcal{H}$ it is used for further classification on new test objects. Thus, all the information contained in the given training sample is summarized in the single hypothesis learned.
The Bayesian approach is conceptually different insofar as it starts with a mea-
sure $P_H$ over the hypotheses—also known as the prior measure—which expresses the belief that $h \in \mathcal{H}$ is the relationship that underlies the data. The notion of belief
is central to Bayesian analysis and should not be confused with more frequentistic
interpretations of the probability PH (h). In a frequentistic interpretation, PH (h) is
the relative frequency with which h underlies the data, i.e., PY|X=x (y) = Ih(x)=y ,
over an infinite number of different (randomly drawn) learning problems. As an
example consider the problem of learning to classify images of Kanji symbols always using the same set of classifiers on the images. Then $P_H(h)$ is the relative frequency of Kanji symbols (and therefore learning tasks) for which $h$ is the best classifier in $\mathcal{H}$. Clearly, this number is difficult to determine and meaningless when given exactly one learning problem. In contrast, a Bayesian interpretation sees the number $P_H(h)$ as expressing the subjective belief that $h \in \mathcal{H}$ models the unknown
relationship between objects and classes. As such the term “belief” is dependent on
the observer and unquestionably the “truth”—or at least the best knowledge about
the truth—for that particular observer. The link between frequentistic probabilities
and subjective beliefs is that, under quite general assumptions of rational behavior
on the basis of beliefs, both measures have to satisfy the Kolmogorov axioms, i.e.,
the same mathematical operations apply to them.
Learning in the Bayesian framework is the incorporation of the observed train-
ing sample $z \in \mathcal{Z}^m$ in the belief expression $P_H$. This results in a so-called posterior measure $P_{H|Z^m=z}$. Compared to the single summary $h^* \in \mathcal{H}$ obtained through the machine learning approach, the Bayesian posterior $P_{H|Z^m=z}$ is a much richer repre-
sentation of the information contained in the training sample z about the unknown
object-class relationship. As mentioned earlier, the Bayesian posterior PH|Zm =z is
1 In order to unburden the main text we again take the liberty of synonymously referring to $\mathcal{H}$, $\mathcal{F}$ and $\mathcal{W}$ as the hypothesis space and to $h \in \mathcal{H}$, $f \in \mathcal{F}$ and $w \in \mathcal{W}$ as hypothesis, classifier or just function (see also Section 2.1 and footnotes therein).
obtained by applying the rules of probability theory (see Theorem A.22), i.e.,
$$\forall h \in \mathcal{H}: \quad P_{H|Z^m=z}(h) = \frac{\overbrace{P_{Z^m|H=h}(z)}^{\text{likelihood of } h} \; \overbrace{P_H(h)}^{\text{prior of } h}}{\underbrace{\mathbf{E}_H\left[P_{Z^m|H=h}(z)\right]}_{\text{evidence of } \mathcal{H}}} = \frac{P_{Y^m|X^m=x, H=h}(\mathbf{y}) \, P_H(h)}{\mathbf{E}_H\left[P_{Y^m|X^m=x, H=h}(\mathbf{y})\right]}\,, \qquad (3.1)$$
where we have used the fact that $P_{Z^m|H=h}(z) = P_{Y^m|X^m=x, H=h}(\mathbf{y}) \, P_{X^m}(\mathbf{x})$ because hypotheses $h \in \mathcal{H}$ only influence the generation of classes $\mathbf{y} \in \mathcal{Y}^m$ but not objects $\mathbf{x} \in \mathcal{X}^m$. Due to the central importance of this formula—which constitutes the main
inference principle in the Bayesian framework—the three terms in equation (3.1)
deserve some discussion.
The Likelihood Let us start with the training data dependent term. Interpreted
as a function of $h \in \mathcal{H}$, this term expresses how "likely" it is to observe the class sequence $\mathbf{y}$ if we are given $m$ objects $\mathbf{x}$ and the true relationship is $h \in \mathcal{H}$. Without
any further prior knowledge, the likelihood contains all information that can be
obtained from the training sample z about the unknown relationship2 . In the case
of learning, the notion of likelihood is defined as follows.
Definition 3.1 (Likelihood) Given a family of models $P_{Y|X=x, H=h}$ over the space $\mathcal{Y}$ together with an observation $z = (x, y) \in \mathcal{Z}$, the function $\mathcal{L} : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}^+$ defined by $\mathcal{L}(h, z) \stackrel{\mathrm{def}}{=} P_{Y|X=x, H=h}(y)$ is called the likelihood of $h$.
In order to relate this definition to the likelihood expression given in equation (3.1) we note that, due to the independence assumption made, it holds that
$$\mathcal{L}(h, z) = P_{Y^m|X^m=x, H=h}(\mathbf{y}) = \prod_{i=1}^{m} P_{Y|X=x_i, H=h}(y_i)\,.$$
The smaller the training error of $h$ on the training sample $z \in \mathcal{Z}^m$, the more likely it is that the function $h$ underlies the data. This has been made more precise in the following likelihood model.
In the limiting case $\beta \to \infty$ the inverse loss likelihood is a constant function, i.e., $\mathcal{L}_l(h, z) = \frac{1}{|\mathcal{Y}|}$ regardless of the hypothesis $h$ considered. In this case no additional information is conveyed by the training sample. The likelihood obtained in the no-
noise case, i.e., β = 0, is of particular importance to us and we shall call it the
PAC-likelihood.3
The Prior The prior measure (or belief) PH is the crucial quantity in a Bayesian
analysis—it is all the knowledge about the relationship between objects and classes
before training data has arrived, encapsulated in a probability measure. Of course,
there is no general rule for determining particular priors. At the time when compu-
tational power was a scarce resource, practitioners suggested conjugate priors.
For example, for a Beta prior with parameters $\alpha$ and $\beta$ over the parameter $p$ of a binomial model, the posterior density after observing $i$ successes in $n$ trials is
$$\frac{p^{\alpha+i-1} (1-p)^{n+\beta-i-1}}{\int_0^1 \hat{p}^{\alpha+i-1} (1-\hat{p})^{n+\beta-i-1} \, d\hat{p}}\,.$$
Another example of a conjugate prior family is the family of Gaussian measures
over the mean µ of another Gaussian measure, which will be discussed at more
length in Section 3.2.
Figure 3.1 Effect of evidence maximization. For a training set size of $m = 5$ we have arranged all possible classifications $\mathbf{y} \in \{-1, +1\}^5$ on the interval $[0, 1]$ by $g(\mathbf{y}) = \sum_{i=1}^{5} 2^{-i+1} \mathbf{I}_{y_i = +1}$ and depicted two different distributions $\mathbf{E}_{H_i}\left[P_{Y^5|X^5=x, H_i=h}(\mathbf{y})\right]$ over the space of all classifications on the 5 training objects $\mathbf{x} \in \mathcal{X}^5$ (gray and white bars). Since both probability mass functions sum up to one there must exist classifications $\mathbf{y}$, e.g., $\mathbf{y}_1$, for which the simpler model $\mathcal{H}_1$ (because it explains only a small number of classifications) has a higher evidence than the more complex model $\mathcal{H}_2$. Nonetheless, if we really observe a complex classification, e.g., $\mathbf{y}_2$, then the maximization of the evidence leads to the "correct" model $\mathcal{H}_2$.
A high evidence for some classifications $\mathbf{y}$ must imply that other classifications, $\tilde{\mathbf{y}}$, lead to a small evidence of the fixed model $\mathcal{H}$. Hence every hypothesis space $\mathcal{H}$ has some "preferred" classifications for which its evidence is high but, necessarily, also other "non-preferred" classifications of the observed object sequence $\mathbf{x} \in \mathcal{X}^m$.
This reasoning motivates the usage of the evidence for the purpose of model
selection. We can view the choice of the hypothesis space $\mathcal{H}$ out of a given set $\{\mathcal{H}_1, \ldots, \mathcal{H}_r\}$ as a model selection problem because it directly influences the
Bayesian inference given in equation (3.1). Using the evidence would lead to the
following model selection algorithm:
$$P_{\mathsf{D}|Z^m=z}(\mathcal{H}_i) = \frac{P_{Z^m|\mathsf{D}=\mathcal{H}_i}(z) \, P_{\mathsf{D}}(\mathcal{H}_i)}{\mathbf{E}_{\mathsf{D}}\left[P_{Z^m|\mathsf{D}=\mathcal{H}_i}(z)\right]} \propto P_{Y^m|X^m=x, \mathsf{D}=\mathcal{H}_i}(\mathbf{y}) \, P_{\mathsf{D}}(\mathcal{H}_i)\,, \qquad (3.3)$$
because the denominator of equation (3.3) does not depend on $\mathcal{H}_i$. Without any prior knowledge, i.e., with a uniform measure $P_{\mathsf{D}}$, we see that the posterior belief is directly proportional to the evidence $P_{Y^m|X^m=x, \mathsf{D}=\mathcal{H}_i}(\mathbf{y})$ of the model $\mathcal{H}_i$. As a consequence, maximizing the evidence in the course of model selection is equivalent to choosing the model with the highest posterior belief.
From a purely Bayesian point of view, for the task of learning we are finished as soon as we have updated our prior belief $P_H$ into the posterior belief $P_{H|Z^m=z}$ using equation (3.1). Nonetheless, our ultimate goal is to find one (deterministic) function $h \in \mathcal{H}$ that best describes the relationship between objects and classes, which is implicitly
4 We say that the hypothesis space $\mathcal{H}$ describes the classification $\mathbf{y}$ at some given training points $\mathbf{x}$ if there exists at least one hypothesis $h \in \mathcal{H}$ which leads to a high likelihood $\mathcal{L}(h, (\mathbf{x}, \mathbf{y}))$. Using the notion of an inverse loss likelihood this means that there exists a hypothesis $h \in \mathcal{H}$ that has a small empirical risk or training error $R_{\mathrm{emp}}[h, (\mathbf{x}, \mathbf{y})]$ (see also Definition 2.11).
If we are restricted to returning a function $h \in \mathcal{H}$ from a pre-specified hypothesis space $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ and assume that $P_{H|Z^m=z}$ is highly peaked around one particular function, then we determine the classifier with the maximum posterior belief.
Assuming the zero-one loss $l_{0-1}$ given in equation (2.10) we see that the Bayes optimal decision at $x$ is given by
$$\mathrm{Bayes}_z(x) \stackrel{\mathrm{def}}{=} \underset{y \in \mathcal{Y}}{\operatorname{argmax}} \; P_{H|Z^m=z}\left(\{h \in \mathcal{H} \mid h(x) = y\}\right)\,. \qquad (3.6)$$
It is interesting to note that, in the special case of the two classes $\mathcal{Y} = \{-1, +1\}$, we can write $\mathrm{Bayes}_z$ as a thresholded real-valued function, i.e.,
$$\mathrm{Bayes}_z(x) = \operatorname{sign}\left(\mathbf{E}_{H|Z^m=z}\left[H(x)\right]\right)\,. \qquad (3.7)$$
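For a finite set of hypotheses with known posterior weights, the Bayes classification strategy (3.6)–(3.7) amounts to a posterior-weighted vote. The following sketch, with arbitrary hypothetical classifiers and weights chosen only for illustration, shows this for $\mathcal{Y} = \{-1, +1\}$.

```python
import numpy as np

def bayes_classification(posterior, hypotheses, x):
    # Bayes_z(x) = argmax_y P_{H|Z=z}({h : h(x) = y})  (eq. 3.6); for Y = {-1, +1}
    # this is the sign of the posterior expectation of H(x)  (eq. 3.7)
    expectation = sum(p * h(x) for p, h in zip(posterior, hypotheses))
    return int(np.sign(expectation))

# three (hypothetical) classifiers with posterior weights summing to one
hypotheses = [lambda x: 1 if x[0] > 0 else -1,
              lambda x: 1 if x[1] > 0 else -1,
              lambda x: 1 if x[0] + x[1] > 0 else -1]
posterior = [0.5, 0.3, 0.2]
print(bayes_classification(posterior, hypotheses, np.array([0.5, -0.2])))  # +1
```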
If we are not restricted to returning a deterministic function $h \in \mathcal{H}$ we can consider the so-called Gibbs classification strategy.
Although this classifier is used less often in practice we will explore the full power
of this classification scheme later in Section 5.1.
In the following three sections we consider specific instances of the Bayesian prin-
ciple which result in new learning algorithms for linear classifiers. It is worth men-
tioning that the Bayesian method is not limited to the task of binary classification
learning, but can also be applied if the output space is the set of real numbers.
In this case, the learning problem is called the problem of regression estimation.
We shall see that in many cases, the regression estimation algorithm is the starting
point to obtain a classification algorithm.
In this section we are going to consider Gaussian processes both for the purpose
of regression and for classification. Gaussian processes, which were initially de-
veloped for the regression estimation case, are extended to classification by using
the concept of latent variables and marginalization. In this sense, the regression
estimation case is much more fundamental.
$$\mathcal{F} = \{ x \mapsto \langle \mathbf{x}, \mathbf{w} \rangle \mid \mathbf{w} \in \mathcal{K} \}\,,$$
where we assume that $\mathbf{x} \stackrel{\mathrm{def}}{=} \boldsymbol{\phi}(x)$ and $\boldsymbol{\phi} : \mathcal{X} \to \mathcal{K} \subseteq \ell_2^n$ is a given feature mapping
In order to predict at a new test object $x \in \mathcal{X}$ using the Bayes prediction strategy we take into account that, by the choice of our likelihood model, we look for the minimizer of squared loss, i.e.,
$$\mathrm{Bayes}_z(x) = \underset{t \in \mathbb{R}}{\operatorname{argmin}} \; \mathbf{E}_{\mathsf{W}|X^m=x, T^m=t}\left[l_2(f_{\mathsf{W}}(x), t)\right] = \underset{t \in \mathbb{R}}{\operatorname{argmin}} \; \mathbf{E}_{\mathsf{W}|X^m=x, T^m=t}\left[\left(\langle \mathbf{x}, \mathsf{W} \rangle - t\right)^2\right]$$
$$= \mathbf{E}_{\mathsf{W}|X^m=x, T^m=t}\left[\langle \mathbf{x}, \mathsf{W} \rangle\right] = \left\langle \mathbf{x}, \mathbf{E}_{\mathsf{W}|X^m=x, T^m=t}\left[\mathsf{W}\right] \right\rangle \qquad (3.10)$$
$$= \left\langle \mathbf{x}, \left(\mathbf{X}^\top \mathbf{X} + \sigma_t^2 \mathbf{I}_n\right)^{-1} \mathbf{X}^\top \mathbf{t} \right\rangle\,,$$
$$\left(\mathbf{X}^\top \mathbf{X} + \sigma_t^2 \mathbf{I}_n\right)^{-1} = \sigma_t^{-2} \mathbf{I}_n - \sigma_t^{-4} \mathbf{X}^\top \left(\mathbf{I}_m + \sigma_t^{-2} \mathbf{X} \mathbf{X}^\top\right)^{-1} \mathbf{X} = \sigma_t^{-2} \left( \mathbf{I}_n - \mathbf{X}^\top \left(\mathbf{X} \mathbf{X}^\top + \sigma_t^2 \mathbf{I}_m\right)^{-1} \mathbf{X} \right)\,.$$
Thus, the Bayesian prediction strategy at a given object $x \in \mathcal{X}$ can be written as
$$\mathbf{x}^\top \left(\mathbf{X}^\top \mathbf{X} + \sigma_t^2 \mathbf{I}_n\right)^{-1} \mathbf{X}^\top \mathbf{t} = \sigma_t^{-2} \left( \mathbf{x}^\top \mathbf{X}^\top - \mathbf{x}^\top \mathbf{X}^\top \left(\mathbf{X} \mathbf{X}^\top + \sigma_t^2 \mathbf{I}_m\right)^{-1} \mathbf{X} \mathbf{X}^\top \right) \mathbf{t}$$
$$= \sigma_t^{-2} \, \mathbf{x}^\top \mathbf{X}^\top \left(\mathbf{X} \mathbf{X}^\top + \sigma_t^2 \mathbf{I}_m\right)^{-1} \left( \left(\mathbf{X} \mathbf{X}^\top + \sigma_t^2 \mathbf{I}_m\right) - \mathbf{X} \mathbf{X}^\top \right) \mathbf{t} = \mathbf{x}^\top \mathbf{X}^\top \left(\mathbf{X} \mathbf{X}^\top + \sigma_t^2 \mathbf{I}_m\right)^{-1} \mathbf{t}\,. \qquad (3.11)$$
Note that this modification only requires us to invert an $m \times m$ matrix rather than the $n \times n$ matrix $\mathbf{X}^\top \mathbf{X} + \sigma_t^2 \mathbf{I}_n$. As a consequence, all that is needed for the prediction at individual objects is the inner product function $k(x, \tilde{x}) = \langle \mathbf{x}, \tilde{\mathbf{x}} \rangle = \langle \boldsymbol{\phi}(x), \boldsymbol{\phi}(\tilde{x}) \rangle$,
also known as the kernel for the mapping $\boldsymbol{\phi} : \mathcal{X} \to \mathcal{K} \subseteq \ell_2^n$ (see also Definition 2.14). Exploiting the notion of kernels, the prediction at any $x \in \mathcal{X}$ can be written as
$$f(x) = \sum_{i=1}^{m} \hat{\alpha}_i k(x, x_i)\,, \qquad \hat{\boldsymbol{\alpha}} = \left(\mathbf{G} + \sigma_t^2 \mathbf{I}_m\right)^{-1} \mathbf{t}\,, \qquad (3.12)$$
where the $m \times m$ matrix $\mathbf{G} = \mathbf{X} \mathbf{X}^\top$ is defined by $\mathbf{G}_{ij} = k(x_i, x_j)$ and is called the Gram matrix. From this expression we see that the computational effort involved in finding the linear function from a given training sample is $\mathcal{O}(m^3)$ since it involves the inversion of the $m \times m$ matrix $\mathbf{G} + \sigma_t^2 \mathbf{I}_m$. However, by exploiting the fact that, for many kernels, the matrix $\mathbf{G}$ has eigenvalues $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_m)$ that decay quickly toward zero, it is possible to approximate the inversion of the matrix $\mathbf{G} + \sigma_t^2 \mathbf{I}_m$ with $\mathcal{O}(m^2)$ computations.
In order to understand why this method is also called Gaussian process re-
gression we note that, under the assumptions made, the probability model of
the data PTm |Xm =x (t) is a Gaussian measure with mean vector 0 and covariance
XX + σt2 I = G + σt2 I (see Theorem A.28 and equations (3.8) and (3.9)). This is
the defining property of a Gaussian process.
As can be seen, Bayesian regression involving linear functions and the prior and likelihood given in equations (3.8) and (3.9), respectively, is equivalent to modeling the outputs as a Gaussian process having mean $0$ and covariance function $C(x, \tilde{x}) = \langle \mathbf{x}, \tilde{\mathbf{x}} \rangle + \sigma_t^2 \mathbf{I}_{x = \tilde{x}} = k(x, \tilde{x}) + \sigma_t^2 \mathbf{I}_{x = \tilde{x}}$. The advantage of the Gaussian
process viewpoint is that weight vectors are avoided—we simply model the data
z = (x, t) directly. In order to derive the prediction f GP (x) of a Gaussian process
at a new object x ∈ we exploit the fact that every conditional measure of a Gaus-
sian measure is again Gaussian (see Theorem A.29). According to equation (A.12)
this yields $P_{T|T^m=t, X^m=x, X=x} = \mathrm{Normal}(\mu_t, \upsilon_t^2)$ with
$$\mu_t = \mathbf{x}^\top \mathbf{X}^\top \left(\mathbf{G} + \sigma_t^2 \mathbf{I}\right)^{-1} \mathbf{t} = \sum_{i=1}^{m} \left( \left(\mathbf{G} + \sigma_t^2 \mathbf{I}\right)^{-1} \mathbf{t} \right)_i k(x_i, x)\,, \qquad (3.13)$$
$$\upsilon_t^2 = \mathbf{x}^\top \mathbf{x} + \sigma_t^2 - \mathbf{x}^\top \mathbf{X}^\top \left(\mathbf{G} + \sigma_t^2 \mathbf{I}\right)^{-1} \mathbf{X} \mathbf{x} \qquad (3.14)$$
$$= k(x, x) + \sigma_t^2 - \sum_{i=1}^{m} \sum_{j=1}^{m} k(x_i, x) \cdot k(x_j, x) \cdot \left( \left(\mathbf{G} + \sigma_t^2 \mathbf{I}\right)^{-1} \right)_{ij}\,,$$
by considering the joint probability of the real-valued outputs $(\mathbf{t}; t)$ at the training points $\mathbf{x} \in \mathcal{X}^m$ and the new test object $x \in \mathcal{X}$ with covariance matrix
$$\begin{pmatrix} \mathbf{G} + \sigma_t^2 \mathbf{I} & \mathbf{X} \mathbf{x} \\ \mathbf{x}^\top \mathbf{X}^\top & \mathbf{x}^\top \mathbf{x} + \sigma_t^2 \end{pmatrix}\,.$$
Note that the expression given in equation (3.13) equals the Bayesian prediction
strategy given in equation (3.11) or (3.12) when using a kernel. Additionally, the
Gaussian process viewpoint offers an analytical expression for the variance of the
prediction at the new test point, as given in equation (3.14). Hence, under the
assumptions made, we can not only predict the new target value at a test object but also judge the reliability of that prediction. It is, though, important to recognize that such error bars on the prediction are meaningless if we cannot guarantee that our Gaussian process model is appropriate for the learning problem at hand.
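A minimal sketch of Gaussian process regression, assuming an RBF kernel and illustrative parameter values: it computes the predictive mean (3.13) and variance (3.14) at new test objects directly from the Gram matrix.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # RBF kernel matrix between the rows of a and b
    d = np.sum(a ** 2, axis=1)[:, None] + np.sum(b ** 2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-d / (2 * sigma ** 2))

def gp_predict(X, t, x_star, sigma=1.0, sigma_t=0.1):
    # predictive mean (3.13) and variance (3.14) at the test points x_star
    K_inv = np.linalg.inv(rbf(X, X, sigma) + sigma_t ** 2 * np.eye(len(t)))
    k_star = rbf(x_star, X, sigma)                       # k(x, x_i)
    mean = k_star @ K_inv @ t
    var = rbf(x_star, x_star, sigma).diagonal() + sigma_t ** 2 \
          - np.sum((k_star @ K_inv) * k_star, axis=1)
    return mean, var

X = np.array([[-2.0], [-1.0], [0.0], [1.0], [2.0]])
t = np.sin(X).ravel()
mean, var = gp_predict(X, t, np.array([[0.5], [3.0]]))
print(mean, np.sqrt(var))  # error bars grow away from the training data
```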
This equivalence allows us to view the parameter $\lambda$ in the support vector classification case as an assumed noise level on the real-valued output $y_i \langle \mathbf{w}, \mathbf{x}_i \rangle$ at all
the training points z i = (xi , yi ). Note that the difference in the classification case
is the thresholding of the target t ∈ Ê to obtain a binary decision y ∈ {−1, +1}.
Under the Gaussian process consideration we see that all prior knowledge has been
incorporated in the choice of a particular kernel k : × → Ê and variance
σt2 ∈ Ê+ . In order to choose between different kernels and variances we employ the
evidence maximization principle. For a given training sample z = (x, t) of object-
target pairs we maximize the expression PTm |Xm =x (t) w.r.t. the kernel parameters
and variance σt2 . The appealing feature of the Gaussian process model is that this
expression is given in an analytical form. It is the value of the m–dimensional
Gaussian density with mean 0 and covariance matrix G + σt2 I at t ∈ Ê m . If we
consider the log-evidence given by
$$\ln P_{T^m|X^m=x}(\mathbf{t}) = -\frac{1}{2} \left( m \ln(2\pi) + \ln\left|\mathbf{G} + \sigma_t^2 \mathbf{I}\right| + \mathbf{t}^\top \left(\mathbf{G} + \sigma_t^2 \mathbf{I}\right)^{-1} \mathbf{t} \right)\,,$$
we see that, in the case of a differentiable kernel function k, the gradient of the log-
evidence can be computed analytically and thus standard optimization methods can
be used to find the most probable kernel parameters.
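Evidence maximization can thus be sketched as evaluating the log-evidence on a grid of kernel and noise parameters (or feeding its gradient to an optimizer). The following minimal illustration, assuming an RBF kernel and a toy data set, computes the log-evidence for a few parameter values.

```python
import numpy as np

def log_evidence(G, t, sigma_t):
    # ln P(t | x) = -1/2 ( m ln(2 pi) + ln|G + sigma_t^2 I| + t'(G + sigma_t^2 I)^{-1} t )
    m = len(t)
    C = G + sigma_t ** 2 * np.eye(m)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (m * np.log(2 * np.pi) + logdet + t @ np.linalg.solve(C, t))

# grid over the RBF bandwidth and the assumed noise level
X = np.array([[-2.0], [-1.0], [0.0], [1.0], [2.0]])
t = np.sin(X).ravel()
D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
for sigma in (0.5, 1.0, 3.0):
    for sigma_t in (0.1, 0.5):
        G = np.exp(-D / (2 * sigma ** 2))
        print(sigma, sigma_t, round(log_evidence(G, t, sigma_t), 3))
```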
Figure 3.2 (Left) The log-evidence for a simple regression problem on the real line $\mathcal{X}=\mathbb{R}$. The $x$-axis varies over different values of the assumed variance $\sigma_t^2$ whereas the $y$-axis ranges over different values of the bandwidth $\sigma$ in an RBF kernel (see Table 2.1). The training sample consists of the 6 observations shown in the middle plot (dots). The dot (•) and cross (×) depict two values at which the gradient vanishes, i.e., local maxima of the evidence. (Middle) The estimated function corresponding to the kernel bandwidth $\sigma=1.1$ and variance $\sigma_t^2=0$ (• in the left picture). The dotted line shows the error bars of one standard deviation computed according to equation (3.14). Note that the variance increases in regions where no training data is available. (Right) The estimated function corresponding to the kernel bandwidth $\sigma=3$ and variance $\sigma_t^2=0.5$ (× in the left picture). This local maximum is attained because all observations are assumed to be generated by the variance component $\sigma_t^2$ only.
we see that, for the case of $\sigma_i\to\infty$, the $i$th input dimension is neglected in the computation of the kernel and can therefore be removed from the dataset (see also Figure 3.3). The appealing feature of using such a kernel is that the log-evidence $\ln\mathbf{P}_{\mathsf{T}^m|\mathsf{X}^m=x}\left(\mathbf{t}\right)$ can be written as a differentiable function in the parameters $\boldsymbol{\sigma}\in\mathbb{R}_+^n$ and thus standard maximization methods such as gradient ascent, Newton-Raphson and conjugate gradients can be applied. Moreover, in a Bayesian spirit it is also possible to additionally favor large values of the parameters $\sigma_i$ by placing an exponential prior on $\sigma_i^{-2}$.
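As a small illustration of this effect, the sketch below implements an ARD-style RBF kernel with one bandwidth per input dimension; the exact kernel of Example 3.12 is not reproduced in this excerpt, so the functional form $k(x,\tilde{x})=\exp(-\frac{1}{2}\sum_i(x_i-\tilde{x}_i)^2/\sigma_i^2)$ is an assumption made for the example only.

```python
import numpy as np

def ard_rbf_kernel(x, x_tilde, sigma):
    """Sketch of an ARD-style RBF kernel with one bandwidth sigma_i per dimension.

    As sigma_i grows, the i-th input dimension contributes less to the kernel;
    in the limit sigma_i -> infinity it is effectively removed from the dataset.
    """
    d = (x - x_tilde) / sigma              # per-dimension scaled differences
    return np.exp(-0.5 * np.sum(d * d))

# Example: with a very large second bandwidth, the second dimension is ignored.
x, x_tilde = np.array([1.0, 0.0]), np.array([1.0, 5.0])
print(ard_rbf_kernel(x, x_tilde, sigma=np.array([1.0, 1e6])))   # approximately 1
```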
We shall now return to our primary problem, which is classification. We are given $m$ classes $\mathbf{y}=\left(y_1,\ldots,y_m\right)\in\mathcal{Y}^m=\{-1,+1\}^m$ rather than $m$ real-valued outputs $\mathbf{t}=\left(t_1,\ldots,t_m\right)\in\mathbb{R}^m$. In order to use Gaussian processes for this purpose we are faced with the following problem: Given a model for $m$ real-valued outputs $\mathbf{t}\in\mathbb{R}^m$, how can we model $2^m$ different binary vectors $\mathbf{y}\in\mathcal{Y}^m$?
In order to solve this problem we note that, for the purpose of classification, we need to know the predictive distribution $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right)$, where $z=\left(\mathbf{x},\mathbf{y}\right)$ is
Figure 3.3 (Left) A function $f_\mathbf{w}$ sampled from the ARD prior with $\sigma_1=\sigma_2=\sqrt{5}$ where $\mathcal{X}=\mathbb{R}^2$. Considering the 1-D functions over the second input dimension for fixed values of the first input dimension, we see that the functions change slowly only for nearby values of the first input dimension. The size of the neighborhood is determined by the choice of $\sigma_1$ and $\sigma_2$. (Right) A function $f_\mathbf{w}$ sampled from the ARD prior with $\sigma_1=20\sigma_2$. As can be seen, the function changes only very slowly over the first input dimension. In the limiting case $\sigma_1\to\infty$ any sample $f_\mathbf{w}$ is a function of the second input dimension only.
the full training sample of object-class pairs. Given the predictive distribution at a new test object $x\in\mathcal{X}$ we decide on the class $y$ with maximum probability $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right)$. The trick which enables the use of a regression estimation method such as Gaussian processes is the introduction of a latent random variable $\mathsf{T}$ which has influence on the conditional class probability $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x}$. As we saw in the last subsection, each prediction $f_{\text{GP}}\left(x\right)$ of a Gaussian process at some test object $x\in\mathcal{X}$ can be viewed as the real-valued output of a mean weight vector $\mathbf{w}_{cm}=\mathbf{E}_{\mathsf{W}|\mathsf{X}^m=x,\mathsf{T}^m=t}\left[\mathsf{W}\right]$ in some fixed feature space (see equation (3.10)), i.e., the distance to the hyperplane with normal vector $\mathbf{w}_{cm}$. Intuitively, the further away a test object $x\in\mathcal{X}$ is from the hyperplane (the larger the value of $t$), the more likely it is that the object is from the class $y=\operatorname{sign}\left(t\right)$. One way to model
this intuitive notion is by
$$\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(y\right) = \frac{\exp\left(\beta^{-1}\cdot yt\right)}{\exp\left(\beta^{-1}\cdot yt\right)+\exp\left(-\beta^{-1}\cdot yt\right)} = \frac{\exp\left(2\beta^{-1}\cdot yt\right)}{1+\exp\left(2\beta^{-1}\cdot yt\right)}\,, \qquad (3.16)$$
Figure 3.4 Latent variable model for classification with Gaussian processes. Each real-valued function (left) is "transferred" through a sigmoid given by equation (3.16) (middle plot). As a result we obtain the predictive distribution $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{T}=t}\left(+1\right)$ for the class $+1$ as a function of the inputs (right). By increasing the noise parameter $\beta$ we get smoother functions $g\left(x\right)=\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{T}=t}\left(+1\right)$. In the limit of $\beta\to 0$ the predictive distribution becomes a zero-one valued function.
where $\beta$ can be viewed as a noise level, i.e., $\lim_{\beta\to 0}\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(y\right)=\mathbf{I}_{yt\geq 0}$ (see also Definition 3.2 and Figure 3.4). In order to exploit the latent random variables we marginalize over all their possible values $\left(\mathbf{t},t\right)\in\mathbb{R}^{m+1}$ at the $m$ training objects $\mathbf{x}\in\mathcal{X}^m$ and the test object $x\in\mathcal{X}$, i.e.,
$$\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right) = \mathbf{E}_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}\left[\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z,\mathsf{T}^{m+1}=(\mathbf{t},t)}\left(y\right)\right]$$
$$\phantom{\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right)} = \int_{\mathbb{R}}\int_{\mathbb{R}^m}\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(y\right) f_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(\left(\mathbf{t},t\right)\right)\,d\mathbf{t}\,dt\,. \qquad (3.17)$$
A problem arises with this integral due to the non-Gaussianity of the term $\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(y\right)$, which means that the integrand $f_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}$ is no longer a Gaussian density and the integral becomes analytically intractable. There are several routes we can take:
1. By assuming that $f_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}$ is a uni-modal function in $\left(\mathbf{t},t\right)\in\mathbb{R}^{m+1}$ we can consider its Laplace approximation. In place of the correct density we use an $(m+1)$-dimensional Gaussian measure with mode $\boldsymbol{\mu}\in\mathbb{R}^{m+1}$ and covariance $\boldsymbol{\Sigma}\in\mathbb{R}^{(m+1)\times(m+1)}$ given by
$$\boldsymbol{\mu} = \operatorname*{argmax}_{(\mathbf{t},t)\in\mathbb{R}^{m+1}} f_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(\left(\mathbf{t},t\right)\right) \qquad (3.18)$$
$$\boldsymbol{\Sigma} = -\left(\left.\frac{\partial^2\ln f_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(\left(\mathbf{t},t\right)\right)}{\partial t_i\,\partial t_j}\right|_{t_i=\mu_i,\,t_j=\mu_j}\right)^{-1}_{i,j=1,\ldots,m+1}\,. \qquad (3.19)$$
2. We can use a Markov chain to sample from $\mathbf{P}_{\mathsf{T}^{m+1}|\mathsf{X}=x,\mathsf{Z}^m=z}$ and use a Monte Carlo approximation to the integral. So, given $K$ samples $\left(\mathbf{t}_1,t_1\right),\ldots,\left(\mathbf{t}_K,t_K\right)$ we approximate the predictive distribution by averaging over the samples
$$\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right) \approx \frac{1}{K}\sum_{i=1}^K\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t_i}\left(y\right)\,.$$
Having found this vector using an iterative Newton-Raphson update we can then compute $\hat{t}$ directly via $\hat{t}=\hat{\mathbf{t}}'\mathbf{G}^{-1}\mathbf{X}\mathbf{x}$. As a consequence, by Theorem A.29 and the results from Appendix B.7, it follows that
$$\mathbf{P}_{\mathsf{T}|\mathsf{X}=x,\mathsf{Z}^m=z} = \operatorname{Normal}\left(\hat{\mathbf{t}}'\mathbf{G}^{-1}\mathbf{X}\mathbf{x},\ \mathbf{x}'\mathbf{x}-\mathbf{x}'\mathbf{X}'\left(\mathbf{I}+\mathbf{P}\mathbf{G}\right)^{-1}\mathbf{P}\mathbf{X}\mathbf{x}\right) = \operatorname{Normal}\left(\hat{t},\upsilon^2\right)\,,$$
where $\mathbf{P}$ is an $m\times m$ diagonal matrix with entries $\beta^{-1}\cdot\mathbf{P}_{\mathsf{Y}|\mathsf{T}=\hat{t}_i}\left(1\right)\left(1-\mathbf{P}_{\mathsf{Y}|\mathsf{T}=\hat{t}_i}\left(1\right)\right)$.
The benefit of this consideration is that the problem of determining the predictive distribution (3.17) reduces to computing
$$\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right) = \int_{\mathbb{R}}\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(y\right) f_{\mathsf{T}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(t\right)\,dt\,, \qquad (3.21)$$
which is now computationally feasible because $f_{\mathsf{T}|\mathsf{X}=x,\mathsf{Z}^m=z}$ is a normal density depending only on the two parameters $\hat{t}$ and $\upsilon^2$. In practice, we would approximate the function $\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}$ by Gaussian densities to be able to evaluate this expression numerically. However, if all we need is the classification, we exploit the fact
that $\operatorname{sign}\left(\hat{t}\right)$ always equals the class $y\in\{-1,+1\}$ with the larger probability $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right)$ (see Appendix B.7). In this case it suffices to compute the vector $\hat{\mathbf{t}}\in\mathbb{R}^m$ using equation (3.20) and to classify a new point according to
$$h_{\text{GPC}}\left(x\right) = \operatorname{sign}\left(\sum_{i=1}^m\hat{\alpha}_i k\left(x_i,x\right)\right)\,, \qquad \hat{\boldsymbol{\alpha}}=\mathbf{G}^{-1}\hat{\mathbf{t}}\,. \qquad (3.22)$$
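A minimal sketch of this decision rule is given below; it assumes the vector $\hat{\mathbf{t}}$ has already been found, e.g., by the Newton-Raphson iteration mentioned above, and it takes the explicit detour via $\mathbf{G}^{-1}$ purely for transparency, which the footnote on the conditioning of $\mathbf{G}$ below warns against for practical use.

```python
import numpy as np

def gpc_classify(K, t_hat, k_star):
    """Sketch of the Gaussian process classification rule of equation (3.22).

    K      : (m, m) Gram matrix G
    t_hat  : (m,)  mode of the latent targets, e.g., found by Newton-Raphson
    k_star : (m,)  kernel evaluations k(x_i, x) at the new test object x
    """
    alpha_hat = np.linalg.solve(K, t_hat)   # alpha = G^{-1} t_hat
    return np.sign(k_star @ alpha_hat)
```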
expansion of the weight vector in the mapped training objects, we see that
$$\hat{\mathbf{t}} = \mathbf{X}\hat{\mathbf{w}} = \mathbf{X}\mathbf{X}'\hat{\boldsymbol{\alpha}} = \mathbf{G}\hat{\boldsymbol{\alpha}} \quad\Leftrightarrow\quad \hat{\boldsymbol{\alpha}}=\mathbf{G}^{-1}\hat{\mathbf{t}}\,.$$
2. By the same argument we know that the term $\mathbf{t}'\mathbf{G}^{-1}\mathbf{t}$ equals $\boldsymbol{\alpha}'\mathbf{G}\boldsymbol{\alpha}=\left\|\mathbf{w}\right\|^2$ (assuming that $\mathbf{w}=\mathbf{X}'\boldsymbol{\alpha}$ exists in the linear span of the mapped training inputs).
Now, if we consider an inverse loss likelihood $\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}$ for the loss $l:\mathbb{R}\times\mathcal{Y}\to\mathbb{R}$, the maximizer $\hat{\mathbf{t}}\in\mathbb{R}^m$ of equation (3.20) must equal the minimizer $\hat{\mathbf{w}}$ of
$$-\sum_{i=1}^m\ln\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t_i}\left(y_i\right)+\left\|\mathbf{w}\right\|^2 = \sum_{i=1}^m l_{\text{sigmoid}}\left(\langle\mathbf{x}_i,\mathbf{w}\rangle,y_i\right)+\left\|\mathbf{w}\right\|^2\,, \qquad (3.23)$$
where $l_{\text{sigmoid}}\left(t,y\right)=\ln\left(1+\exp\left(2\beta^{-1}\cdot yt\right)\right)-2\beta^{-1}yt$. Note that $l_{\text{sigmoid}}:\mathbb{R}\times\mathcal{Y}\to\mathbb{R}$ is another approximation of the zero-one loss $l_{0-1}$ (see Figure 3.5 (left) and equation (2.9)). In this sense, Gaussian processes for classification are another
6 Basically, a closer look at equations (3.22) and (3.20) shows that, in order to obtain $\hat{\mathbf{t}}$, we need to invert the Gram matrix $\mathbf{G}\in\mathbb{R}^{m\times m}$, which is then used again to compute $\hat{\boldsymbol{\alpha}}$. If the Gram matrix is badly conditioned, i.e., the ratio between the largest and smallest eigenvalue of $\mathbf{G}$ is very large, then the error in computing $\hat{\boldsymbol{\alpha}}$ by (3.22) can be very large even though we may have found a good estimate $\hat{\mathbf{t}}\in\mathbb{R}^m$. Therefore, the algorithm presented avoids the "detour" via $\hat{\mathbf{t}}$ and directly optimizes w.r.t. $\boldsymbol{\alpha}$. The more general difficulty is that inverting a matrix is an ill-posed problem (see also Appendix A.4).
[Figure 3.5 (left): plot of the loss $l_{\text{sigmoid}}$ for $\beta=0.5$, $\beta=1$ and $\beta=2$, together with the zero-one loss $\mathbf{I}_{yf(x)\leq 0}$ and the curve $\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(-1\right)+\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}\left(+1\right)$; only the legend labels are recoverable from the extracted text.]
In the last section we saw that a direct application of Bayesian ideas to the problem of regression estimation yields efficient algorithms known as Gaussian processes. In this section we will carry out the same analysis with a slightly refined prior $\mathbf{P}_\mathsf{W}$ on linear functions $f_\mathbf{w}$ in terms of their weight vectors $\mathbf{w}\in\mathcal{K}\subseteq\ell_2^n$. As we will
weight vectors with a small number of non-zero coefficients. One way to achieve this is to modify the prior in equation (3.8), giving
$$\mathbf{P}_\mathsf{W} = \operatorname{Normal}\left(\mathbf{0},\boldsymbol{\Theta}\right)\,,$$
where $\boldsymbol{\Theta}=\operatorname{diag}\left(\boldsymbol{\theta}\right)$ and $\boldsymbol{\theta}=\left(\theta_1,\ldots,\theta_n\right)'\in\mathbb{R}_+^n$ is assumed known. The idea behind this prior is similar to the idea of automatic relevance determination given in Example 3.12. By considering $\theta_i\to 0$ we see that the only possible value for the $i$th component of the weight vector $\mathbf{w}$ is $0$ and, therefore, even when considering the Bayesian prediction $\text{Bayes}_z$ the $i$th component is set to zero. In order to make inference we consider the likelihood model given in equation (3.9), that is, we assume that the target values $\mathbf{t}=\left(t_1,\ldots,t_m\right)\in\mathbb{R}^m$ are normally distributed with mean $\langle\mathbf{x}_i,\mathbf{w}\rangle$ and variance $\sigma_t^2$. Using Theorem A.28 it follows that the posterior measure over weight vectors $\mathbf{w}$ is again Gaussian, i.e.,
As described in the last section, the Bayesian prediction at a new test object $x\in\mathcal{X}$ is given by $\text{Bayes}_z\left(x\right)=\langle\mathbf{x},\boldsymbol{\mu}\rangle$. Since we assumed that many of the $\theta_i$ are zero, i.e., the effective number $n_{\text{eff}}=\left\|\boldsymbol{\theta}\right\|_0$ of features $\phi_i:\mathcal{X}\to\mathbb{R}$ is small, it follows that $\boldsymbol{\Sigma}$ and $\boldsymbol{\mu}$ are easy to calculate7. The interesting question is: Given a training sample $z=\left(\mathbf{x},\mathbf{t}\right)\in\left(\mathcal{X}\times\mathbb{R}\right)^m$, how can we "learn" the sparse vector $\boldsymbol{\theta}=\left(\theta_1,\ldots,\theta_n\right)'$?
In the current formulation, the vector $\boldsymbol{\theta}$ is a model parameter and thus we shall employ evidence maximization to find the value $\hat{\boldsymbol{\theta}}$ that is best supported by the given training data $z=\left(\mathbf{x},\mathbf{t}\right)$. One of the greatest advantages is that we know the
7 In practice, we delete all features $\phi_i:\mathcal{X}\to\mathbb{R}$ corresponding to small $\theta$-values and fix the associated $\mu$-values to zero.
In Appendix B.8 we derive explicit update rules for $\boldsymbol{\theta}$ and $\sigma_t^2$ which, in case of convergence, are guaranteed to find a local maximum of the evidence (3.25). The update rules are given by
$$\theta_i^{(\text{new})} = \frac{\mu_i^2}{\zeta_i}\,, \qquad \left(\sigma_t^2\right)^{(\text{new})} = \frac{\left\|\mathbf{t}-\mathbf{X}\boldsymbol{\mu}\right\|^2}{m-\sum_{i=1}^n\zeta_i}\,, \qquad \zeta_i = 1-\theta_i^{-1}\boldsymbol{\Sigma}_{ii}\,.$$
Interestingly, during the application of these update rules it turns out that many of the $\theta_i$ decrease quickly toward zero, which leads to a high sparsity in the mean weight vector $\boldsymbol{\mu}$. Note that, whenever $\theta_i$ falls below a pre-specified threshold, we delete the $i$th column from $\mathbf{X}$ as well as $\theta_i$ itself, which reduces the number of features used by one. This leads to faster convergence of the algorithm as it progresses because the necessary inversion of the matrix $\sigma_t^{-2}\mathbf{X}'\mathbf{X}+\boldsymbol{\Theta}^{-1}$ in (3.24) becomes computationally less demanding. After termination, all components $\hat{w}_i$ of the learned weight vector $\hat{\mathbf{w}}\in\mathbb{R}^n$ for which $\theta_i$ is below the threshold are set to exactly $0$; the remaining coefficients $\hat{w}_i$ are set equal to the corresponding values in $\boldsymbol{\mu}=\sigma_t^{-2}\boldsymbol{\Sigma}\mathbf{X}'\mathbf{t}$.
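A compact sketch of the resulting update loop is given below. It assumes the standard Bayesian linear regression posterior $\boldsymbol{\Sigma}=(\sigma_t^{-2}\mathbf{X}'\mathbf{X}+\boldsymbol{\Theta}^{-1})^{-1}$ and $\boldsymbol{\mu}=\sigma_t^{-2}\boldsymbol{\Sigma}\mathbf{X}'\mathbf{t}$ together with the update rules quoted above; the column pruning described in the text is reduced to a final thresholding step, and all names and initializations are illustrative only.

```python
import numpy as np

def rvm_regression(X, t, n_iter=100, prune_tol=1e-6):
    """Sketch of the relevance vector machine update loop for regression.

    X : (m, n) data matrix of mapped training objects
    t : (m,)  observed real-valued outputs
    """
    m, n = X.shape
    theta = np.ones(n)                       # prior variances theta_i
    sigma_t2 = 0.1 * np.var(t) + 1e-6        # initial noise variance (heuristic)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(X.T @ X / sigma_t2 + np.diag(1.0 / theta))
        mu = Sigma @ X.T @ t / sigma_t2
        zeta = 1.0 - np.diag(Sigma) / theta              # zeta_i = 1 - theta_i^{-1} Sigma_ii
        theta = mu ** 2 / np.maximum(zeta, 1e-12)        # theta_i^(new)
        sigma_t2 = np.sum((t - X @ mu) ** 2) / max(m - zeta.sum(), 1e-12)
    w_hat = np.where(theta < prune_tol, 0.0, mu)         # sparse weight vector
    return w_hat, theta, sigma_t2
```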
In order to apply this algorithm (which has so far been developed for the case of regression estimation only) to our initial problem of classification learning (recall that we are given a sample $z=\left(\mathbf{x},\mathbf{y}\right)\in\left(\mathcal{X}\times\{-1,+1\}\right)^m$ of object-class pairs), we use the idea outlined in the previous subsection. In particular, when computing the predictive distribution $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}$ of the class $y\in\{-1,+1\}$ at a new test object $x\in\mathcal{X}$, we consider $m+1$ latent variables $\mathsf{T}_1,\ldots,\mathsf{T}_m,\mathsf{T}_{m+1}$ at all the $m$ training objects $\mathbf{x}\in\mathcal{X}^m$ and at the test object $x\in\mathcal{X}$, computed by applying a latent weight vector $\mathsf{W}$ to all the $m+1$ mapped objects $\left(\mathbf{x},x\right)\in\mathcal{X}^{m+1}$. By marginalizing over all the possible values $\mathbf{w}\in\mathbb{R}^n$ of $\mathsf{W}$ we obtain
$$\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right) = \mathbf{E}_{\mathsf{W}|\mathsf{X}=x,\mathsf{Z}^m=z}\left[\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z,\mathsf{W}=\mathbf{w}}\left(y\right)\right]$$
$$\phantom{\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}\left(y\right)} = \int_{\mathbb{R}^n}\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{W}=\mathbf{w}}\left(y\right)\cdot f_{\mathsf{W}|\mathsf{Z}^m=z}\left(\mathbf{w}\right)\,d\mathbf{w}\,.$$
Note that $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{W}=\mathbf{w}}\left(y\right)=\mathbf{P}_{\mathsf{Y}|\mathsf{T}=\langle\mathbf{x},\mathbf{w}\rangle}\left(y\right)$, where $\mathbf{P}_{\mathsf{Y}|\mathsf{T}=t}$ is given by equation (3.16). Similarly to the Gaussian process case, the problem with this integral is that it cannot be performed analytically because the integrand $f_{\mathsf{W}|\mathsf{Z}^m=z}\left(\mathbf{w}\right)$ is no longer
All the training objects $x_i\in\mathbf{x}$ which have a non-zero coefficient $\alpha_i$ are termed relevance vectors because they appear to be the most relevant for the correct prediction
Figure 3.6 (Left) Marginalized log-prior densities $f_{\mathsf{W}_i}$ over single weight vector components $w_i$ implicitly considered in relevance vector machines. Relevance vector machines are recovered in the case of $a\to 0$ and $b\to\infty$, in which the prior is infinitely peaked at $w=0$. (Right) Surface plot for the special case of $n=2$ and $b=a^{-1}=1\,000$. Note that this prior favors one zero weight vector component $w_1=0$ much more than two very small values $|w_1|$ and $|w_2|$ and is sometimes called a sparsity prior.
of the whole training sample.8 The appealing feature of using models of the form (3.28) is that we still learn a linear classifier (function) in some feature space $\mathcal{K}$. Not only does this allow us to apply all the theoretical results we shall obtain in Part II of this book, but the geometrical picture given in Section 2.1 is also still valid for this algorithm.
8 Another reason for terming them relevance vectors is that the idea underlying the algorithm is motivated by
automatic relevance determination, introduced in Example 3.12 (personal communication with M. Tipping).
The problem with the latter expression is that we cannot analytically compute the final integral. Although we get a closed form expression for the density $f_{\mathsf{T}^m|\mathsf{X}^m=x,\mathsf{Q}=\boldsymbol{\theta}}$ (a Gaussian measure derived in equation (3.25)) we cannot perform the expectation analytically, regardless of the prior distribution chosen. When using a product of Gamma distributions for $\mathbf{P}_\mathsf{Q}$, i.e., $f_\mathsf{Q}\left(\boldsymbol{\theta}\right)=\prod_{i=1}^n\mathbf{Gamma}\left(a,b\right)\left(\theta_i^{-1}\right)$, it can be shown, however, that, in the limit of $a\to 0$ and $b\to\infty$, the mode of the joint distribution $f_{\mathsf{Q}\mathsf{T}^m|\mathsf{X}^m=x}\left(\boldsymbol{\theta},\mathbf{t}\right)$ equals the vector $\hat{\boldsymbol{\theta}}$ and $\hat{\mathbf{t}}=\mathbf{X}\boldsymbol{\mu}$ (see equation (3.24)) as computed by the relevance vector machine algorithm. Hence, the relevance vector machine—which performs evidence maximization over the hyperparameters $\boldsymbol{\theta}\in\mathbb{R}^n$—can also be viewed as a maximum a-posteriori estimator of $\mathbf{P}_{\mathsf{W}\mathsf{Q}|\mathsf{X}^m=x,\mathsf{T}^m=t}$ because $\mathbf{t}=\mathbf{X}\mathbf{w}$. As such it is interesting to investigate the marginalized prior $\mathbf{P}_\mathsf{W}=\mathbf{E}_\mathsf{Q}\left[\mathbf{P}_{\mathsf{W}|\mathsf{Q}=\boldsymbol{\theta}}\right]$. In Figure 3.6 we have depicted the form of this marginalized prior for a single component (left) and for the special case of a two-dimensional feature space (right). It can be seen from these plots that, by the implicit choice of this prior, the relevance vector machine looks for a mode $\hat{\boldsymbol{\theta}}$ in a posterior density which has almost all a-priori probability mass on sparse solutions. This somewhat explains why the relevance vector machine algorithm tends to find very sparse solutions.
The algorithms introduced in the last two sections solve the classification learning problem by taking a "detour" via the regression estimation problem. For each training object it is assumed that we have prior knowledge $\mathbf{P}_\mathsf{W}$ about the latent variables $\mathsf{T}_i$ corresponding to the logit transformation of the probability of $x_i$ being from the observed class $y_i$. This is a quite cumbersome assumption as we are unable to directly express prior knowledge on observed quantities such as the classes $\mathbf{y}\in\mathcal{Y}^m=\{-1,+1\}^m$. In this section we are going to consider an algorithm which results from a direct modeling of the classes.
Let us start by defining the prior $\mathbf{P}_\mathsf{W}$. In the classification case we note that, for any $\lambda>0$, the weight vectors $\mathbf{w}$ and $\lambda\mathbf{w}$ perform the same classification because $\operatorname{sign}\left(\langle\mathbf{x},\mathbf{w}\rangle\right)=\operatorname{sign}\left(\langle\mathbf{x},\lambda\mathbf{w}\rangle\right)$. As a consequence we consider only weight vectors of unit length, i.e., $\mathbf{w}\in\mathcal{W}$, $\mathcal{W}=\left\{\mathbf{w}\in\mathcal{K}\,\middle|\,\left\|\mathbf{w}\right\|=1\right\}$ (see also Section 2.1). In the absence of any prior knowledge we assume a uniform prior measure $\mathbf{P}_\mathsf{W}$ over the unit hypersphere $\mathcal{W}$. An argument in favor of the uniform prior is that the belief in the weight vector $\mathbf{w}$ should be equal to the belief in the weight vector $-\mathbf{w}$
under the assumption of equal class probabilities $\mathbf{P}_\mathsf{Y}\left(-1\right)$ and $\mathbf{P}_\mathsf{Y}\left(+1\right)$. Since the classification $\mathbf{y}_{-\mathbf{w}}=\left(\operatorname{sign}\left(\langle\mathbf{x}_1,-\mathbf{w}\rangle\right),\ldots,\operatorname{sign}\left(\langle\mathbf{x}_m,-\mathbf{w}\rangle\right)\right)$ of the weight vector $-\mathbf{w}$ at the training sample $z\in\mathcal{Z}^m$ equals the negated classification $-\mathbf{y}_{\mathbf{w}}=-\left(\operatorname{sign}\left(\langle\mathbf{x}_1,\mathbf{w}\rangle\right),\ldots,\operatorname{sign}\left(\langle\mathbf{x}_m,\mathbf{w}\rangle\right)\right)$ of $\mathbf{w}$, it follows that the assumption of equal belief in $\mathbf{w}$ and $-\mathbf{w}$ corresponds to assuming that $\mathbf{P}_\mathsf{Y}\left(-1\right)=\mathbf{P}_\mathsf{Y}\left(+1\right)=\frac{1}{2}$.
In order to derive an appropriate likelihood model, let us assume that there is no noise on the classifications, that is, we shall use the PAC-likelihood $l_{\text{PAC}}$ as given in Definition 3.3. Note that such a likelihood model corresponds to using the zero-one loss $l_{0-1}$ in the machine learning scenario (see equations (2.10) and (3.2)). According to Bayes' theorem it follows that the posterior belief in weight vectors (and therefore in classifiers) is given by
The set $V\left(z\right)\subseteq\mathcal{W}$ is called version space and is the set of all weight vectors $\mathbf{w}$ that parameterize classifiers which classify all the training objects correctly (see also Definition 2.12). Due to the PAC-likelihood, any weight vector $\mathbf{w}$ which does not have this property is "cut off", resulting in a uniform posterior measure $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$ over version space. Given a new test object $x\in\mathcal{X}$ we can compute the predictive distribution $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}$ of the class $y$ at $x\in\mathcal{X}$ by
The Bayes classification strategy based on $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Z}^m=z}$ decides on the class with the larger probability. An appealing feature of the two-class case $\mathcal{Y}=\{-1,+1\}$ is that this decision can also be written as
$$\text{Bayes}_z\left(x\right) = \operatorname{sign}\left(\mathbf{E}_{\mathsf{W}|\mathsf{Z}^m=z}\left[\operatorname{sign}\left(\langle\mathbf{x},\mathsf{W}\rangle\right)\right]\right)\,, \qquad (3.30)$$
that is, the Bayes classification strategy effectively performs majority voting involving all version space classifiers. The difficulty with the latter expression is that we cannot analytically compute the expectation as this requires efficient integration of a convex body on a hypersphere (see also Figures 2.1 and 2.8). Hence, we approximate the Bayes classification strategy by a single classifier.
Definition 3.15 (Bayes point) Given a training sample $z$ and a posterior measure $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$ over the unit hypersphere $\mathcal{W}$, the Bayes point $\mathbf{w}_{bp}\in\mathcal{W}$ is defined by
$$\mathbf{w}_{bp} = \operatorname*{argmin}_{\mathbf{w}\in\mathcal{W}}\,\mathbf{E}_\mathsf{X}\left[l_{0-1}\left(\text{Bayes}_z\left(\mathsf{X}\right),\operatorname{sign}\left(\langle\boldsymbol{\phi}\left(\mathsf{X}\right),\mathbf{w}\rangle\right)\right)\right]\,,$$
that is, the Bayes point is the optimal projection of the Bayes classification strategy to a single classifier $\mathbf{w}_{bp}$ w.r.t. generalization error.
Although the Bayes point is easily defined, its computation is much more difficult because it requires complete knowledge of the input distribution $\mathbf{P}_\mathsf{X}$. Moreover, it requires a minimization process w.r.t. the Bayes classification strategy which involves the posterior measure $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$—a computationally difficult task. A closer look at equation (3.30), however, shows that another reasonable approximation to the Bayes classification strategy is given by exchanging $\operatorname{sign}\left(\cdot\right)$ and expectation, i.e.,
$$h_{cm}\left(x\right) = \operatorname{sign}\left(\mathbf{E}_{\mathsf{W}|\mathsf{Z}^m=z}\left[\langle\mathbf{x},\mathsf{W}\rangle\right]\right) = \operatorname{sign}\Big(\big\langle\mathbf{x},\underbrace{\mathbf{E}_{\mathsf{W}|\mathsf{Z}^m=z}\left[\mathsf{W}\right]}_{\mathbf{w}_{cm}}\big\rangle\Big)\,.$$
The idea behind this "trick" is that, if the version space $V\left(z\right)$ is almost point-symmetric w.r.t. $\mathbf{w}_{cm}$ then, for each weight vector $\mathbf{w}\in V\left(z\right)$ in version space, there exists another weight vector $\tilde{\mathbf{w}}=2\mathbf{w}_{cm}-\mathbf{w}\in V\left(z\right)$ also in version space and, thus,
$$\operatorname{sign}\left(\langle\mathbf{x},\mathbf{w}\rangle\right)+\operatorname{sign}\left(\langle\mathbf{x},\tilde{\mathbf{w}}\rangle\right) = \begin{cases}2\cdot\operatorname{sign}\left(\langle\mathbf{x},\mathbf{w}_{cm}\rangle\right) & \text{if }\left|\langle\mathbf{x},\mathbf{w}\rangle\right|<\left|\langle\mathbf{x},\mathbf{w}_{cm}\rangle\right|\\ 0 & \text{otherwise}\end{cases}\,,$$
that is, the Bayes classification of a new test object equals the classification carried out by the single weight vector $\mathbf{w}_{cm}$. The advantage of the classifier $\mathbf{w}_{cm}$—which is also the center of mass of version space $V\left(z\right)$—is that it can be computed or estimated without any extra knowledge about the data distribution. Since the center of mass is another approximation to the Bayes classification we call every algorithm that computes $\mathbf{w}_{cm}$ a Bayes point algorithm, although the formal definition of the Bayes point approximation is slightly different. In the following subsection we present one possible algorithm for estimating the center of mass.
The main idea in computing the center of mass of version space is to replace the analytical integral by a sum over randomly drawn classifiers, i.e.,
$$\mathbf{w}_{cm} = \mathbf{E}_{\mathsf{W}|\mathsf{Z}^m=z}\left[\mathsf{W}\right] \approx \frac{1}{K}\sum_{i=1}^K\mathbf{w}_i\,, \qquad \mathbf{w}_i\sim\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}\,.$$
Such methods are known as Monte Carlo methods and have proven to be successful in practice. A difficulty we encounter with this approach is in obtaining samples $\mathbf{w}_i$ drawn according to the distribution $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$. Recalling that $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$ is uniform over a convex polyhedron on the surface of the hypersphere in feature space, we see that it is quite difficult to sample from it directly. A commonly used approach to this problem is to approximate the sampling distribution $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$ by a Markov chain. A Markov chain is fully specified by a probability distribution $\mathbf{P}_{\mathsf{W}_1\mathsf{W}_2}$, where $f_{\mathsf{W}_1\mathsf{W}_2}\left(\left(\mathbf{w}_1,\mathbf{w}_2\right)\right)$ is the "transition" probability for progressing from a randomly drawn weight vector $\mathbf{w}_1$ to another weight vector $\mathbf{w}_2$. Sampling from the Markov chain involves iteratively drawing a new weight vector $\mathbf{w}_{i+1}$ by sampling from $\mathbf{P}_{\mathsf{W}_2|\mathsf{W}_1=\mathbf{w}_i}$. The Markov chain is called ergodic w.r.t. $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$ if the limiting distribution of this sampling process is $\mathbf{P}_{\mathsf{W}|\mathsf{Z}^m=z}$ regardless of our choice of $\mathbf{w}_0$. Then it suffices to start with a random weight vector $\mathbf{w}_0\in\mathcal{W}$ and, at each step, to obtain a new sample $\mathbf{w}_i\in\mathcal{W}$ drawn according to $\mathbf{P}_{\mathsf{W}_2|\mathsf{W}_1=\mathbf{w}_{i-1}}$. The combination of these two techniques has become known as the Markov-Chain-Monte-Carlo (MCMC) method for estimating the expectation $\mathbf{E}_{\mathsf{W}|\mathsf{Z}^m=z}\left[\mathsf{W}\right]$.
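The following sketch shows the generic shape of such an estimate; the transition kernel is left as a placeholder to be filled in by, for example, one bounce of the kernel billiard described next, and the simple uniform average ignores the chord-length weighting used by the actual algorithm.

```python
import numpy as np

def estimate_center_of_mass(w0, transition_sample, K=1000):
    """Sketch of the MCMC estimate of the center of mass w_cm = E[W | Z^m = z].

    w0                : starting weight vector inside version space (unit norm)
    transition_sample : placeholder for a transition kernel; it draws the next
                        weight vector given the current one (e.g., one billiard bounce)
    K                 : number of samples to average over
    """
    w = np.asarray(w0, dtype=float)
    w_sum = np.zeros_like(w)
    for _ in range(K):
        w = transition_sample(w)            # w_i ~ P_{W_2 | W_1 = w_{i-1}}
        w_sum += w
    w_cm = w_sum / K
    return w_cm / np.linalg.norm(w_cm)      # project back onto the unit hypersphere
```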
We now outline an MCMC algorithm for approximating the Bayes point by the center of mass of version space $V\left(z\right)$ (the full pseudocode is given on page 330). Since it is difficult to generate weight vectors that parameterize classifiers consistent with the whole training sample $z\in\mathcal{Z}^m$, we average over the trajectory of a ball which is placed inside version space and bounced like a billiard ball. As a consequence we call this MCMC method the kernel billiard. We express each position $\mathbf{b}$ of the ball and each estimate $\mathbf{w}_i$ of the center of mass of $V\left(z\right)$ as a linear combination of the mapped training objects, i.e.,
$$\mathbf{w} = \sum_{i=1}^m\alpha_i\mathbf{x}_i\,, \qquad \mathbf{b} = \sum_{i=1}^m\gamma_i\mathbf{x}_i\,, \qquad \boldsymbol{\alpha}\in\mathbb{R}^m\,,\ \boldsymbol{\gamma}\in\mathbb{R}^m\,.$$
Figure 3.7 (Left) 5 samples $\mathbf{b}_1,\ldots,\mathbf{b}_5$ (white dots) obtained by playing billiards on the sphere in the special case of $\mathcal{W}\subseteq\mathbb{R}^3$. In the update step, only the chord lengths (gray lines) are taken into consideration. (Right) Schematic view of the kernel billiard algorithm. Starting at $\mathbf{w}_0\in V\left(z\right)$ a trajectory of billiard bounces $\mathbf{b}_1,\ldots,\mathbf{b}_5,\ldots$ is computed and then averaged over so as to obtain an estimate $\hat{\mathbf{w}}_{cm}$ of the center of mass of version space.
Without loss of generality we can make the following assumption about the needed direction vector $\mathbf{v}$:
$$\mathbf{v} = \sum_{i=1}^m\beta_i\mathbf{x}_i\,, \qquad \boldsymbol{\beta}\in\mathbb{R}^m\,.$$
After computing all $m$ flight times, we look for the smallest positive one,
$$c = \operatorname*{argmin}_{j:\tau_j>0}\tau_j\,.$$
where $\xi_i=\left\|\mathbf{b}_i-\mathbf{b}_{i+1}\right\|$ is the length of the trajectory in the $i$th step and $\Xi_i=\sum_{j=1}^i\xi_j$ is the accumulated length up to the $i$th step. Note that the operation $\oplus_\mu$ is only an approximation to the addition operation we sought because an exact weighting would require arc lengths rather than chord lengths.
As a stopping criterion we compute an upper bound on $\rho_2$, the weighting factor of the new part of the trajectory. If this value falls below a prespecified threshold we stop the algorithm. Note that an increase in $\Xi_i$ will always lead to termination.
In this last section we are going to consider one of the earliest approaches to the problem of classification learning. The idea underlying this approach is slightly different from the ideas outlined so far. Rather than using the decomposition $\mathbf{P}_{\mathsf{X}\mathsf{Y}}=\mathbf{P}_{\mathsf{Y}|\mathsf{X}}\mathbf{P}_\mathsf{X}$ we now decompose the unknown probability measure $\mathbf{P}_{\mathsf{X}\mathsf{Y}}=\mathbf{P}_\mathsf{Z}$ constituting the learning problem as $\mathbf{P}_{\mathsf{X}\mathsf{Y}}=\mathbf{P}_{\mathsf{X}|\mathsf{Y}}\mathbf{P}_\mathsf{Y}$. The essential difference between these two formal expressions becomes apparent when considering the model choices:
1. In the case of $\mathbf{P}_{\mathsf{X}\mathsf{Y}}=\mathbf{P}_{\mathsf{Y}|\mathsf{X}}\mathbf{P}_\mathsf{X}$ we use hypotheses $h\in\mathcal{H}\subseteq\mathcal{Y}^\mathcal{X}$ to model the conditional measure $\mathbf{P}_{\mathsf{Y}|\mathsf{X}}$ of classes $y\in\mathcal{Y}$ given objects $x\in\mathcal{X}$ and marginalize over $\mathbf{P}_\mathsf{X}$. In the noise-free case, each hypothesis defines such a model by $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{H}=h}\left(y\right)=\mathbf{I}_{h\left(x\right)=y}$. Since our model for learning contains only predictors $h:\mathcal{X}\to\mathcal{Y}$ that discriminate between objects, this approach is sometimes called the predictive or discriminative approach.
2. In the case of $\mathbf{P}_{\mathsf{X}\mathsf{Y}}=\mathbf{P}_{\mathsf{X}|\mathsf{Y}}\mathbf{P}_\mathsf{Y}$ we model the generation of objects $x\in\mathcal{X}$ given the class $y\in\mathcal{Y}=\{-1,+1\}$ by some assumed probability model $\mathbf{P}_{\mathsf{X}|\mathsf{Y}=y,\mathsf{Q}=\boldsymbol{\theta}}$, where $\boldsymbol{\theta}=\left(\boldsymbol{\theta}_{+1},\boldsymbol{\theta}_{-1},p\right)\in\mathcal{Q}$ parameterizes this generation process. We have the additional parameter $p\in\left[0,1\right]$ to describe the probability $\mathbf{P}_{\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(y\right)$ by $p\cdot\mathbf{I}_{y=+1}+\left(1-p\right)\cdot\mathbf{I}_{y=-1}$. As the model contains probability measures from which the generated training sample $\mathbf{x}\in\mathcal{X}^m$ is sampled, this approach is sometimes called the generative or sampling approach.
In order to classify a new test object $x\in\mathcal{X}$ with a model $\boldsymbol{\theta}\in\mathcal{Q}$ in the generative approach we make use of Bayes' theorem, i.e.,
$$\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Q}=\boldsymbol{\theta}}\left(y\right) = \frac{\mathbf{P}_{\mathsf{X}|\mathsf{Y}=y,\mathsf{Q}=\boldsymbol{\theta}}\left(x\right)\mathbf{P}_{\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(y\right)}{\sum_{\tilde{y}\in\mathcal{Y}}\mathbf{P}_{\mathsf{X}|\mathsf{Y}=\tilde{y},\mathsf{Q}=\boldsymbol{\theta}}\left(x\right)\mathbf{P}_{\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(\tilde{y}\right)}\,.$$
In the case of two classes $\mathcal{Y}=\{-1,+1\}$ and the zero-one loss, as given in equation (2.10), we obtain for the Bayes optimal classification at a novel test object $x\in\mathcal{X}$,
$$h_{\boldsymbol{\theta}}\left(x\right) = \operatorname*{argmax}_{y\in\{-1,+1\}}\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x}\left(y\right) = \operatorname{sign}\left(\ln\left(\frac{\mathbf{P}_{\mathsf{X}|\mathsf{Y}=+1,\mathsf{Q}=\boldsymbol{\theta}}\left(x\right)\cdot p}{\mathbf{P}_{\mathsf{X}|\mathsf{Y}=-1,\mathsf{Q}=\boldsymbol{\theta}}\left(x\right)\cdot\left(1-p\right)}\right)\right)\,, \qquad (3.34)$$
as the fraction in this expression is greater than one if, and only if, $\mathbf{P}_{\mathsf{X}\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(\left(x,+1\right)\right)$ is greater than $\mathbf{P}_{\mathsf{X}\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(\left(x,-1\right)\right)$. In the generative approach the task of learning amounts to finding the parameters $\boldsymbol{\theta}^*\in\mathcal{Q}$ or measures $\mathbf{P}_{\mathsf{X}|\mathsf{Y}=y,\mathsf{Q}=\boldsymbol{\theta}^*}$ and $\mathbf{P}_{\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}^*}$ which incur the smallest expected risk $R\left[h_{\boldsymbol{\theta}^*}\right]$ by virtue of equation (3.34). Again, we are faced with the problem that, without restrictions on the measure $\mathbf{P}_{\mathsf{X}|\mathsf{Y}=y}$, the best model is the empirical measure $v_{\mathbf{x}_y}\left(x\right)$, where $\mathbf{x}_y\subseteq\mathbf{x}$ is the sample of all training objects of class $y$. Obviously, this is a bad model because $v_{\mathbf{x}_y}\left(x\right)$ assigns zero probability to all test objects not present in the training sample and thus $h_{\boldsymbol{\theta}}\left(x\right)=0$, i.e., we are unable to make predictions on unseen objects. Similarly to the choice of the hypothesis space in the discriminative model, we must constrain the possible generative models $\mathbf{P}_{\mathsf{X}|\mathsf{Y}=y}$.
Let us consider the class of probability measures from the exponential family,
$$f_{\mathsf{X}|\mathsf{Y}=y,\mathsf{Q}=\boldsymbol{\theta}}\left(x\right) = a_0\left(\boldsymbol{\theta}_y\right)\tau_0\left(x\right)\exp\left(\boldsymbol{\theta}_y'\boldsymbol{\tau}\left(x\right)\right)\,,$$
for some fixed function $a_0$ of the parameters, $\tau_0:\mathcal{X}\to\mathbb{R}$ and $\boldsymbol{\tau}:\mathcal{X}\to\mathcal{K}$. Using this functional form of the density we see that each decision function $h_{\boldsymbol{\theta}}$ must be of the following form
$$h_{\boldsymbol{\theta}}\left(x\right) = \operatorname{sign}\left(\ln\left(\frac{a_0\left(\boldsymbol{\theta}_{+1}\right)\tau_0\left(x\right)\exp\left(\boldsymbol{\theta}_{+1}'\boldsymbol{\tau}\left(x\right)\right)\cdot p}{a_0\left(\boldsymbol{\theta}_{-1}\right)\tau_0\left(x\right)\exp\left(\boldsymbol{\theta}_{-1}'\boldsymbol{\tau}\left(x\right)\right)\left(1-p\right)}\right)\right)$$
$$\phantom{h_{\boldsymbol{\theta}}\left(x\right)} = \operatorname{sign}\Big(\underbrace{\left(\boldsymbol{\theta}_{+1}-\boldsymbol{\theta}_{-1}\right)'}_{\mathbf{w}'}\boldsymbol{\tau}\left(x\right)+\underbrace{\ln\left(\frac{a_0\left(\boldsymbol{\theta}_{+1}\right)\cdot p}{a_0\left(\boldsymbol{\theta}_{-1}\right)\left(1-p\right)}\right)}_{b}\Big) \qquad (3.35)$$
$$\phantom{h_{\boldsymbol{\theta}}\left(x\right)} = \operatorname{sign}\left(\langle\mathbf{w},\boldsymbol{\tau}\left(x\right)\rangle+b\right)\,.$$
This result is very interesting as it shows that, for a rather large class of generative models, the final classification function is a linear function in the model parameters $\boldsymbol{\theta}=\left(\boldsymbol{\theta}_{-1},\boldsymbol{\theta}_{+1},p\right)$. Now, consider the special case that the distribution $\mathbf{P}_{\mathsf{X}|\mathsf{Y}=y,\mathsf{Q}=\boldsymbol{\theta}}$ of objects $x\in\mathcal{X}$ given classes $y\in\{-1,+1\}$ is a multidimensional Gaussian in
some feature space $\mathcal{K}\subseteq\ell_2^n$ mapped into by some given feature map $\boldsymbol{\phi}:\mathcal{X}\to\mathcal{K}$,
$$f_{\mathsf{X}|\mathsf{Y}=y,\mathsf{Q}=\boldsymbol{\theta}}\left(x\right) = \left(2\pi\right)^{-\frac{n}{2}}\left|\boldsymbol{\Sigma}\right|^{-\frac{1}{2}}\exp\left(-\frac{1}{2}\left(\mathbf{x}-\boldsymbol{\mu}_y\right)'\boldsymbol{\Sigma}^{-1}\left(\mathbf{x}-\boldsymbol{\mu}_y\right)\right)\,, \qquad (3.36)$$
where the parameters $\boldsymbol{\theta}_y$ are the mean vector $\boldsymbol{\mu}_y\in\mathbb{R}^n$ and the covariance matrix $\boldsymbol{\Sigma}_y\in\mathbb{R}^{n\times n}$, respectively. Making the additional assumptions that the covariance matrix is the same for both models $\boldsymbol{\theta}_{+1}$ and $\boldsymbol{\theta}_{-1}$ and that $p=\mathbf{P}_{\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(+1\right)=\mathbf{P}_{\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left(-1\right)=\frac{1}{2}$, we see that, according to equations (A.16)–(A.17) and (3.35),
$$\boldsymbol{\tau}\left(x\right) = \mathbf{x}\,, \quad \mathbf{w} = \boldsymbol{\Sigma}^{-1}\left(\boldsymbol{\mu}_{+1}-\boldsymbol{\mu}_{-1}\right)\,, \quad b = \frac{1}{2}\left(\boldsymbol{\mu}_{-1}'\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_{-1}-\boldsymbol{\mu}_{+1}'\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_{+1}\right)\,. \qquad (3.37)$$
This result also follows from substituting (3.36) directly into equation (3.34) (see Figure 3.8 (left)).
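A short sketch of this special case, computing $\mathbf{w}$ and $b$ of equation (3.37) from the two class means and the shared covariance matrix, might look as follows (illustrative names; not the book's pseudocode).

```python
import numpy as np

def gaussian_discriminant(mu_plus, mu_minus, Sigma):
    """Sketch of the linear classifier of equation (3.37) for two Gaussian
    class-conditional densities with shared covariance Sigma and equal priors.
    Returns the weight vector w and offset b of h(x) = sign(<w, x> + b)."""
    w = np.linalg.solve(Sigma, mu_plus - mu_minus)   # Sigma^{-1} (mu_+1 - mu_-1)
    b = 0.5 * (mu_minus @ np.linalg.solve(Sigma, mu_minus)
               - mu_plus @ np.linalg.solve(Sigma, mu_plus))
    return w, b

def classify(x, w, b):
    return np.sign(w @ x + b)
```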
training sample $z$ we update our prior belief $\mathbf{P}_\mathsf{Q}$, giving a posterior belief $\mathbf{P}_{\mathsf{Q}|\mathsf{Z}^m=z}$. Since we need one particular parameter value we compute the MAP estimate $\hat{\boldsymbol{\theta}}$, that is, we choose the value of $\boldsymbol{\theta}$ which attains the maximum a-posteriori belief $\mathbf{P}_{\mathsf{Q}|\mathsf{Z}^m=z}$ (see also Definition 3.6). If we choose an (improper) uniform prior $\mathbf{P}_\mathsf{Q}$ then the parameter $\hat{\boldsymbol{\theta}}$ equals the parameter vector which maximizes the likelihood and is therefore also known as the maximum likelihood estimator. In Appendix B.10 it is shown that these estimates are given by
$$\hat{\boldsymbol{\mu}}_y = \frac{1}{m_y}\sum_{(x_i,y)\in z}\mathbf{x}_i\,, \qquad \hat{\boldsymbol{\Sigma}} = \frac{1}{m}\sum_{y\in\{-1,+1\}}\sum_{(x_i,y)\in z}\left(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_y\right)\left(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_y\right)' \qquad (3.39)$$
$$\phantom{\hat{\boldsymbol{\Sigma}}} = \frac{1}{m}\left(\mathbf{X}'\mathbf{X}-\sum_{y\in\{-1,+1\}}m_y\hat{\boldsymbol{\mu}}_y\hat{\boldsymbol{\mu}}_y'\right)\,,$$
$$\mathbf{S} = \frac{1}{m}\left(\mathbf{G}\mathbf{G}-\sum_{y\in\{-1,+1\}}m_y\mathbf{k}_y\mathbf{k}_y'\right)+\lambda\mathbf{I}\,,$$
where the $m\times m$ matrix $\mathbf{G}$ with $\mathbf{G}_{ij}=\langle\mathbf{x}_i,\mathbf{x}_j\rangle=k\left(x_i,x_j\right)$ is the Gram matrix.
Using $\mathbf{k}_y$ and $\mathbf{S}$ in place of $\boldsymbol{\mu}_y$ and $\boldsymbol{\Sigma}$ in the equations (3.37) results in the so-called kernel Fisher discriminant. Note that the $m$-dimensional vector computed corresponds to the linear expansion coefficients $\hat{\boldsymbol{\alpha}}\in\mathbb{R}^m$ of a weight vector $\mathbf{w}_{\text{KFD}}$ in feature space because the classification of a novel test object $x\in\mathcal{X}$ by the kernel Fisher discriminant is carried out on the projected data point $\mathbf{X}\mathbf{x}$, i.e.,
$$h\left(x\right) = \operatorname{sign}\left(\langle\hat{\boldsymbol{\alpha}},\mathbf{X}\mathbf{x}\rangle+\hat{b}\right) = \operatorname{sign}\left(\sum_{i=1}^m\hat{\alpha}_i k\left(x_i,x\right)+\hat{b}\right)\,,$$
$$\hat{\boldsymbol{\alpha}} = \mathbf{S}^{-1}\left(\mathbf{k}_{+1}-\mathbf{k}_{-1}\right)\,, \qquad \hat{b} = \frac{1}{2}\left(\mathbf{k}_{-1}'\mathbf{S}^{-1}\mathbf{k}_{-1}-\mathbf{k}_{+1}'\mathbf{S}^{-1}\mathbf{k}_{+1}\right)\,. \qquad (3.40)$$
2
It is worth mentioning that we would have obtained the same solution by exploiting
the fact that the objective function (3.38) depends only on inner products between
mapped training objects xi and the unknown weight vector w. By virtue of Theorem
m
2.29 the solution wFD can be written as wFD = i=1 α̂i xi which, inserted into
(3.38), yields a function in α whose maximizer is given by equation (3.40). The
pseudocode of this algorithm is given on page 329.
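As an illustration only (the pseudocode on page 329 is authoritative), a sketch of the kernel Fisher discriminant might look as follows; since the definition of $\mathbf{k}_y$ is not reproduced in this excerpt, the sketch assumes $\mathbf{k}_y$ is the mean column of the Gram matrix over the class-$y$ examples and uses the regularized $\mathbf{S}$ quoted above.

```python
import numpy as np

def kernel_fisher_discriminant(K, y, lam=1e-3):
    """Sketch of the kernel Fisher discriminant of equation (3.40).

    K   : (m, m) Gram matrix with K[i, j] = k(x_i, x_j)
    y   : (m,)  labels in {-1, +1}
    lam : regularization constant lambda added to S
    """
    y = np.asarray(y)
    m = len(y)
    k_plus = K[:, y == +1].mean(axis=1)       # assumed class mean in "kernel space"
    k_minus = K[:, y == -1].mean(axis=1)
    S = (K @ K.T) / m + lam * np.eye(m)       # (1/m) G G' + lambda I
    for ky, my in ((k_plus, np.sum(y == +1)), (k_minus, np.sum(y == -1))):
        S -= (my / m) * np.outer(ky, ky)      # subtract (m_y / m) k_y k_y'
    alpha = np.linalg.solve(S, k_plus - k_minus)
    b = 0.5 * (k_minus @ np.linalg.solve(S, k_minus)
               - k_plus @ np.linalg.solve(S, k_plus))
    return alpha, b

def kfd_classify(alpha, b, k_star):
    """Classify a new object from its kernel evaluations k_star[i] = k(x_i, x)."""
    return np.sign(alpha @ k_star + b)
```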
In order to reveal the relation between this algorithm and the Fisher linear discriminant we assume that $\mathbf{X}\in\mathbb{R}^{m\times(n+1)}$ is a new data matrix constructed from $\mathbf{X}$ by adding a column of ones, i.e., $\mathbf{X}=\left(\mathbf{X},\mathbf{1}\right)$. Our new weight vector $\tilde{\mathbf{w}}=\left(\mathbf{w};b\right)\in\mathbb{R}^{n+1}$ already contains the offset $b$. By choosing
$$\mathbf{t} = m\cdot\left(y_1/m_{y_1},\ldots,y_m/m_{y_m}\right)'\,,$$
where $m_{+1}$ and $m_{-1}$ are the number of positively and negatively labeled examples in the training sample, we see that the maximum condition $\mathbf{X}'\mathbf{X}\tilde{\mathbf{w}}=\mathbf{X}'\mathbf{t}$ can also be written
$$\begin{pmatrix}\mathbf{X}'\\\mathbf{1}'\end{pmatrix}\left(\mathbf{X}\ \ \mathbf{1}\right)\begin{pmatrix}\hat{\mathbf{w}}\\\hat{b}\end{pmatrix} = \begin{pmatrix}\mathbf{X}'\\\mathbf{1}'\end{pmatrix}\mathbf{t} \quad\Leftrightarrow\quad \begin{pmatrix}\mathbf{X}'\mathbf{X} & \mathbf{X}'\mathbf{1}\\\mathbf{1}'\mathbf{X} & \mathbf{1}'\mathbf{1}\end{pmatrix}\begin{pmatrix}\hat{\mathbf{w}}\\\hat{b}\end{pmatrix} = \begin{pmatrix}\mathbf{X}'\mathbf{t}\\\mathbf{1}'\mathbf{t}\end{pmatrix}\,.$$
By construction $\mathbf{1}'\mathbf{t}=m\left(\frac{m_{+1}}{m_{+1}}-\frac{m_{-1}}{m_{-1}}\right)=0$ and, thus, the last equation gives
$$\mathbf{1}'\mathbf{X}\hat{\mathbf{w}}+\hat{b}\cdot\mathbf{1}'\mathbf{1} = 0 \quad\Leftrightarrow\quad \hat{b} = -\frac{1}{m}\mathbf{1}'\mathbf{X}\hat{\mathbf{w}}\,. \qquad (3.41)$$
Inserting this expression into the first equation and noticing that, by virtue of equation (3.39),
$$\mathbf{X}'\mathbf{t} = m\cdot\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\,,$$
we see that
$$\mathbf{X}'\mathbf{X}\hat{\mathbf{w}}+\mathbf{X}'\mathbf{1}\cdot\hat{b} = \left(\mathbf{X}'\mathbf{X}-\frac{1}{m}\mathbf{X}'\mathbf{1}\mathbf{1}'\mathbf{X}\right)\hat{\mathbf{w}} = m\cdot\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\,. \qquad (3.42)$$
A straightforward calculation shows that
$$\frac{1}{m}\mathbf{X}'\mathbf{1}\mathbf{1}'\mathbf{X} = m_{+1}\hat{\boldsymbol{\mu}}_{+1}\hat{\boldsymbol{\mu}}_{+1}'+m_{-1}\hat{\boldsymbol{\mu}}_{-1}\hat{\boldsymbol{\mu}}_{-1}'-\frac{m_{+1}m_{-1}}{m}\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)'\,.$$
Combining this expression with equation (3.42) results in
$$\left(\hat{\boldsymbol{\Sigma}}+\frac{m_{+1}m_{-1}}{m}\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)'\right)\hat{\mathbf{w}} = m\cdot\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\,,$$
where we used the definition of $\hat{\boldsymbol{\Sigma}}$ given in equation (3.39). Finally, noticing that
$$\frac{m_{+1}m_{-1}}{m}\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)'\hat{\mathbf{w}} = \left(1-c\right)\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)$$
for some $c\in\mathbb{R}$, the latter expression implies that
$$\hat{\mathbf{w}} = m\cdot c\cdot\hat{\boldsymbol{\Sigma}}^{-1}\left(\hat{\boldsymbol{\mu}}_{+1}-\hat{\boldsymbol{\mu}}_{-1}\right)\,,$$
that is, up to a scaling factor (which is immaterial in classification) the weight vector $\hat{\mathbf{w}}$ obtained by least squares regression on $\mathbf{t}\propto\mathbf{y}$ equals the Fisher discriminant. The value of the threshold $\hat{b}$ is given by equation (3.41).
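The equivalence can be checked numerically with a few lines of code; the sketch below performs least squares regression on the targets $t_i=m\,y_i/m_{y_i}$ with an appended column of ones, as in the construction above (names are illustrative).

```python
import numpy as np

def fisher_by_least_squares(X, y):
    """Sketch: least squares regression on targets t proportional to y recovers
    the Fisher discriminant direction (up to scaling).

    X : (m, n) data matrix, y : (m,) labels in {-1, +1}
    """
    y = np.asarray(y)
    m = len(y)
    m_pos, m_neg = np.sum(y == +1), np.sum(y == -1)
    t = m * np.where(y == +1, 1.0 / m_pos, -1.0 / m_neg)   # t_i = m * y_i / m_{y_i}
    X_aug = np.hstack([X, np.ones((m, 1))])                 # append a column of ones
    w_aug, *_ = np.linalg.lstsq(X_aug, t, rcond=None)       # solve the normal equations
    w_hat, b_hat = w_aug[:-1], w_aug[-1]
    return w_hat, b_hat
```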
In the first section of this chapter we introduced the Bayesian inference principle
whose basis is given by Bayes’ theorem (see equation (3.1)). Excellent monographs
introducing this principle in more detail are by Bernardo and Smith (1994) and by
Robert (1994); for a more applied treatment of ideas to the problem of learning
see MacKay (1991) and MacKay (1999). It was mentioned that the philosophy
underlying Bayesian inference is based on the notion of belief. The link between
belief and probability is established in the seminal paper Cox (1946) where a min-
imal number of axioms regarding belief are given. Broadly speaking, these axioms
formalize rational behavior on the basis of belief. A major concept in Bayesian
analysis is the concept of prior belief. In the book we have only introduced the idea
of conjugate priors. As the prior is the crux of Bayesian inference there exist, of
course, many different approaches to defining a prior, for example on the basis of
invariances w.r.t. parameterization of the likelihood (Jeffreys 1946; Jaynes 1968).
In the context of learning, the model selection principle of evidence maximization
was formulated for the first time in MacKay (1992). In Subsection 3.1.1 we intro-
duced several prediction strategies on the basis of posterior belief in hypotheses.
Note that the term Bayes classification strategy (see Definition 3.7) should not be
confused with the term Bayes (optimal) classifier which is used to denote the strat-
egy which decides on the class y that incurs minimal loss on the prediction of x
(see Devroye et al. (1996)). The latter strategy is based on complete knowledge
of the data distribution PZ and therefore achieves minimal error (sometimes also
called Bayes error) for a particular learning problem.
Section 3.2 introduced Bayesian linear regression (see Box and Tiao (1973)) and revealed its relation to certain stochastic processes known as Gaussian processes (Feller 1966); the presentation closely follows MacKay (1998) and Williams (1998). In order to relate this algorithm to neural networks (see Bishop (1995)) it was shown in Neal (1996) that a Gaussian process on the targets emerges in the limiting case of an infinite number of hidden neurons and Gaussian priors on the individual weights. The extension to classification using the Laplace approximation was done for the first time in Barber and Williams (1997) and Williams and Barber (1998). It was noted that there also exists a Markov chain approximation (see Neal (1997b)) and an approximation known as the mean field approximation (see Opper and Winther (2000)). It should be noted that Gaussian processes for regression estimation are far from new; historical details dating back to 1880 can be found in Lauritzen (1981). Within the geostatistics field, Matheron proposed a
has been noticed in several places, e.g., Vapnik (1982, p. 48). The idea of ker-
nelizing this algorithm has been considered by several researchers independently
yet at the same time (see Baudat and Anouar (2000), Mika et al. (1999) and Roth
and Steinhage (2000)). Finally, the equivalence of Fisher discriminants and least
squares regression, demonstrated in Remark 3.16, can also be found in Duda et al.
(2001).
It is worth mentioning that, besides the four algorithms presented, an interesting and conceptually different learning approach has been put forward in Jaakkola et al. (2000) and Jebara and Jaakkola (2000). The algorithm presented there employs the principle of maximum entropy (see Levin and Tribus (1978)). Rather than specifying a prior distribution over hypotheses together with a likelihood model $\mathbf{P}_{\mathsf{Z}|\mathsf{H}=h}$ for the objects and classes, given a hypothesis $h$, which, by Bayes' theorem, results in the Bayesian posterior, we consider any measure $\mathbf{P}_\mathsf{H}$ which satisfies certain constraints on the given training sample $z$ as a potential candidate for the posterior belief. The principle then chooses the measure $\mathbf{P}_\mathsf{H}^{\text{ME}}$ which maximizes the entropy $\mathbf{E}_\mathsf{H}\left[-\ln\left(\mathbf{P}_\mathsf{H}\left(\mathsf{H}\right)\right)\right]$. The idea behind this principle is to use as little prior knowledge or information as possible in the construction of $\mathbf{P}_\mathsf{H}^{\text{ME}}$. Implementing this formal principle for the special case of linear classifiers results in an algorithm very similar to the support vector algorithm (see Section 2.4). The essential difference is given by the choice of the cost function on the margin slack variables. A similar observation has already been made in Remark 3.13.
II Learning Theory
4 Mathematical Models of Learning
In order to see that this function has minimal expected risk we note that
$$R_{\boldsymbol{\theta}}\left[h\right] \stackrel{\text{def}}{=} \mathbf{E}_{\mathsf{X}\mathsf{Y}|\mathsf{Q}=\boldsymbol{\theta}}\left[l\left(h\left(\mathsf{X}\right),\mathsf{Y}\right)\right] = \mathbf{E}_{\mathsf{X}|\mathsf{Q}=\boldsymbol{\theta}}\left[\mathbf{E}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Q}=\boldsymbol{\theta}}\left[l\left(h\left(x\right),\mathsf{Y}\right)\right]\right]\,, \qquad (4.2)$$
where $h_{\boldsymbol{\theta}}$ minimizes the expression in the innermost brackets. For the case of the zero-one loss $l_{0-1}\left(h\left(x\right),y\right)=\mathbf{I}_{h\left(x\right)\neq y}$ also defined in equation (2.10), the function $h_{\boldsymbol{\theta}}$ reduces to
$$h_{\boldsymbol{\theta}}\left(x\right) = \operatorname*{argmin}_{y\in\mathcal{Y}}\left(1-\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Q}=\boldsymbol{\theta}}\left(y\right)\right) = \operatorname*{argmax}_{y\in\mathcal{Y}}\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{Q}=\boldsymbol{\theta}}\left(y\right)\,,$$
The term generative refers to the fact that the model contains different descriptions of the generation of the training sample $z$ (in terms of a probability measure). Similarly, the term discriminative refers to the fact that the model consists of different descriptions of the discrimination of the sample $z$. We already know that a machine learning method selects one hypothesis $\mathcal{A}\left(z\right)\in\mathcal{H}$ given a training sample $z\in\mathcal{Z}^m$. The corresponding selection mechanism of a probability measure $\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}$ given the training sample $z$ is called an estimator.
by $\boldsymbol{\theta}\in\mathcal{Q}$, then $\hat{\boldsymbol{\theta}}_z\in\mathcal{Q}$ is defined by
$$\hat{\boldsymbol{\theta}}_z = \boldsymbol{\theta} \quad\Leftrightarrow\quad \mathcal{A}\left(z\right) = \mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}\,,$$
that is, $\hat{\boldsymbol{\theta}}_z$ returns the parameters of the measure estimated using $\mathcal{A}$.
If we view a given hypothesis space $\mathcal{H}$ as the set of parameters $h$ for the conditional distribution $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x,\mathsf{H}=h}$ then we see that each learning algorithm $\mathcal{A}:\cup_{m=1}^\infty\mathcal{Z}^m\to\mathcal{H}$
risk $\inf_{h\in\mathcal{H}}R\left[h\right]=R\left[h^*\right]$. A theoretical result in this framework has the form
$$\mathbf{P}_{\mathsf{Z}^m}\left(R\left[\mathcal{A}\left(\mathsf{Z}\right)\right]-R\left[h^*\right]>\varepsilon\right) < \delta\left(\varepsilon,m\right)\,, \qquad (4.4)$$
where the expression in the parentheses is also known as the generalization error (see also Definition 2.10). In the case $R\left[h^*\right]=0$ the generalization error equals the expected risk. Note that each hypothesis $h\in\mathcal{H}$ is reduced to a scalar $R\left[h\right]$ so that the question of an appropriate metric $\rho$ is meaningless2. Since $\mathbf{P}_\mathsf{Z}$ is assumed to be unknown, the above inequality has to hold for all probability measures $\mathbf{P}_\mathsf{Z}$. This is often referred to as the worst case property of the machine learning framework. The price we have to pay for this generality is that our choice of the predictive model might be totally wrong (e.g., $R\left[h^*\right]=0.5$ in the case of the zero-one loss $l_{0-1}$) so that learning $\mathcal{A}\left(z\right)\in\mathcal{H}$ is useless.
For the task of learning—where finding the best discriminative description of the data is assumed to be the ultimate goal—the convergence (4.4) of risks appears the most appropriate. We note, however, that this convergence is a special case of the convergence (4.3) of probability measures when identifying $\mathcal{H}$ and $\mathcal{Q}$ and using $\rho\left(\mathbf{P}_{\mathsf{Z}|\mathsf{H}=h},\mathbf{P}_{\mathsf{Z}|\mathsf{H}=h^*}\right)=R\left[h\right]-R\left[h^*\right]$. The interesting question is:
If this were the case then there would be no need to study the convergence of risk; we could instead use the plethora of results known from statistics about the convergence of probability measures. If, on the other hand, this is not the case then it also follows that (in general) the common practice of interpreting the parameters $\mathbf{w}$ (or $\boldsymbol{\theta}$) of the learned hypothesis is theoretically not justified on the basis of convergence results of the form (4.4). Let us consider the following example.
2 All norms on the real line $\mathbb{R}^1$ are equivalent (see Barner and Flohr (1989, p. 15)).
3 This example is taken from Devroye et al. (1996, p. 267).
Figure 4.1 True densities $f_{\mathsf{X}|\mathsf{Y}=y}$ underlying the data in Example 4.2. The uniform densities (solid lines) on $[0,1]$ and $[-2,0]$ apply for $Y=1$ and $Y=2$, respectively. Although with probability one the parameter $\theta_1^*=1$ will be estimated to arbitrary precision, the probability that a sample point falls at exactly $x=1$ is zero, whence $(\hat{\boldsymbol{\theta}}_\mathsf{Z})_1\neq 1$. Since the model is noncontinuous in its parameters $\boldsymbol{\theta}$, for almost all training samples the estimated densities are uniform on $[-(\hat{\boldsymbol{\theta}}_\mathsf{Z})_2,0]$ and $[-(\hat{\boldsymbol{\theta}}_\mathsf{Z})_1,0]$ (dashed lines). Thus, for all $x>0$ the prediction based on $\hat{\boldsymbol{\theta}}_\mathsf{Z}$ is wrong.
$$\left(\hat{\boldsymbol{\theta}}_z\right)_i = \max_{(x,i)\in z}\left|x\right| \quad\text{for } i\in\{1,2\}\,,$$
because
$$\forall\varepsilon>0:\quad \lim_{m\to\infty}\mathbf{P}_{\mathsf{Z}^m}\left(\left\|\hat{\boldsymbol{\theta}}_\mathsf{Z}-\boldsymbol{\theta}^*\right\|_2>\varepsilon\right) = 0\,,$$
This simple example shows that the convergence of probability measures is not necessarily a guarantee of convergence of the associated risks. It should be noted, however, that this example used the noncontinuity of the parameterization $\boldsymbol{\theta}$ of the probability measure $\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}$ as well as one specific metric $\rho$ on probability measures. The following example shows that, along with the difference $R\left[h_{\hat{\boldsymbol{\theta}}_z}\right]-R\left[h_{\boldsymbol{\theta}^*}\right]$ in expected risks, there exists another "natural" metric on probability measures which leads to a convergence of risks.
Let us assume that our generative model only consists of measures $\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}$ that possess a density $f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}$ over the $\sigma$-algebra $\mathcal{B}_n$ of Borel sets in $\mathbb{R}^n$. The theorem of Scheffé states that
$$\rho\left(\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}},\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}\right) \stackrel{\text{def}}{=} \left\|f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}-f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}\right\|_1 = 2\sup_{A\in\mathcal{B}_n}\left|\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}\left(A\right)-\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}\left(A\right)\right|\,.$$
Utilizing equation (4.5) and the fact that each measure $\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}$ defines a Bayes optimal classifier $h_{\boldsymbol{\theta}}$ by equation (4.1), we conclude
$$\left\|f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}-f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}\right\|_1 = 2\sup_{A\in\mathcal{B}_n}\left|\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}\left(A\right)-\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}\left(A\right)\right|$$
$$\geq 2\sup_{\tilde{\boldsymbol{\theta}}\in\mathcal{Q}}\left|R_{\boldsymbol{\theta}}\left[h_{\tilde{\boldsymbol{\theta}}}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\tilde{\boldsymbol{\theta}}}\right]\right|$$
$$\geq \left|R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}}\right]\right|+\left|R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}^*}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}^*}\right]\right|$$
$$= \left|R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}}\right]-R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}}\right]\right|+\left|R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}^*}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}^*}\right]\right|$$
$$\geq \left|R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}}\right]-R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}}\right]+R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}^*}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}^*}\right]\right|$$
$$= \Big|\underbrace{R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}^*}\right]}_{\geq 0}+\underbrace{R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}^*}\right]-R_{\boldsymbol{\theta}}\left[h_{\boldsymbol{\theta}}\right]}_{\geq 0}\Big|$$
$$\geq R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}}\right]-R_{\boldsymbol{\theta}^*}\left[h_{\boldsymbol{\theta}^*}\right]$$
$$= R\left[h_{\boldsymbol{\theta}}\right]-R\left[h_{\boldsymbol{\theta}^*}\right]\,,$$
where we use the triangle inequality in the fifth line and assume $\mathbf{P}_\mathsf{Z}=\mathbf{P}_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}$ in the last line. Thus we see that the convergence of the densities in $L_1$ implies the convergence (4.4) of the expected risks for the associated decision functions, because each upper bound on $\left\|f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}}-f_{\mathsf{Z}|\mathsf{Q}=\boldsymbol{\theta}^*}\right\|_1$ is also an upper bound on $R\left[h_{\boldsymbol{\theta}}\right]-R\left[h_{\boldsymbol{\theta}^*}\right]$.
Note, however, that the convergence in expected risks could be much faster and
thus we lose some tightness of the potential results when studying the convergence
of probability measures.
The main problem in the last two examples is summarized in the following
statement made in Vapnik (1995): When solving a given problem one should avoid
solving a more general problem as an intermediate step. In our particular case this
means that if we are interested in the convergence of the expected risks we should
not resort to the convergence of probability measures because the latter might not
imply the former or might be a weaker convergence than required. Those who first estimate $\mathbf{P}_\mathsf{Z}$ by $\mathcal{A}\left(z\right)\in\mathcal{Q}$ and then construct rules based on the loss $l$ do themselves
a disservice.
As a starting point let us consider the huge class of empirical risk minimization algorithms $\mathcal{A}_{\text{ERM}}$ formally defined in equation (2.12). To obtain upper bounds on the deviation between the expected risk of the function $\mathcal{A}_{\text{ERM}}\left(z\right)$ (which minimizes the training error $R_{\text{emp}}\left[h,z\right]$) and the best function $h^*=\operatorname{arginf}_{h\in\mathcal{H}}R\left[h\right]$, the general idea is to make use of the following relation
$$R_{\text{emp}}\left[h^*,z\right] \geq R_{\text{emp}}\left[\mathcal{A}_{\text{ERM}}\left(z\right),z\right] \quad\Leftrightarrow\quad R_{\text{emp}}\left[h^*,z\right]-R_{\text{emp}}\left[\mathcal{A}_{\text{ERM}}\left(z\right),z\right] \geq 0\,,$$
$$R\left[\mathcal{A}_{\text{ERM}}\left(z\right)\right]-R\left[h^*\right] \leq R\left[h_z\right]-R\left[h^*\right]+\underbrace{R_{\text{emp}}\left[h^*,z\right]-R_{\text{emp}}\left[h_z,z\right]}_{\geq 0}$$
$$= \left(R\left[h_z\right]-R_{\text{emp}}\left[h_z,z\right]\right)+\left(R_{\text{emp}}\left[h^*,z\right]-R\left[h^*\right]\right)$$
$$\leq \left|R\left[h_z\right]-R_{\text{emp}}\left[h_z,z\right]\right|+\left|R\left[h^*\right]-R_{\text{emp}}\left[h^*,z\right]\right|$$
$$\leq 2\sup_{h\in\mathcal{H}}\left|R\left[h\right]-R_{\text{emp}}\left[h,z\right]\right|\,, \qquad (4.6)$$
where we have made use of the triangle inequality in the third line and bounded the uncertainty about $\mathcal{A}_{\text{ERM}}\left(z\right)\in\mathcal{H}$ and $h^*\in\mathcal{H}$ from above by the worst case term $\sup_{h\in\mathcal{H}}\left|R\left[h\right]-R_{\text{emp}}\left[h,z\right]\right|$. We see that, rather than studying the generalization error of an empirical risk minimization algorithm directly, it suffices to consider the uniform convergence of training errors to expected errors over all hypotheses $h\in\mathcal{H}$ contained in the hypothesis space $\mathcal{H}$, because
any upper bound on the deviation $\sup_{h\in\mathcal{H}}\left|R\left[h\right]-R_{\text{emp}}\left[h,z\right]\right|$ is also an upper bound on the generalization error $R\left[\mathcal{A}_{\text{ERM}}\left(z\right)\right]-R\left[h^*\right]$ by virtue of equation (4.6). The framework which studies this convergence is called the VC (Vapnik-Chervonenkis) or PAC (Probably Approximately Correct) framework due to their different origins (see Section 4.5 for a detailed discussion about their origins and connections). Broadly speaking, the difference between the PAC framework and the VC framework is that the former considers only data distributions $\mathbf{P}_\mathsf{Z}$ where $\mathbf{P}_{\mathsf{Y}|\mathsf{X}=x}\left(y\right)=\mathbf{I}_{h^*\left(x\right)=y}$ for some $h^*\in\mathcal{H}$, which immediately implies that $R\left[h^*\right]=0$ and $R_{\text{emp}}\left[\mathcal{A}_{\text{ERM}}\left(z\right),z\right]=0$. Thus, it follows that
$$R\left[\mathcal{A}_{\text{ERM}}\left(z\right)\right]-R\left[h^*\right] = R\left[\mathcal{A}_{\text{ERM}}\left(z\right)\right] \leq \sup_{\left\{h\in\mathcal{H}\,\mid\,R_{\text{emp}}\left[h,z\right]=0\right\}}R\left[h\right]\,, \qquad (4.7)$$
because $\mathcal{A}_{\text{ERM}}\left(z\right)\in\left\{h\in\mathcal{H}\,\middle|\,R_{\text{emp}}\left[h,z\right]=0\right\}\subseteq\mathcal{H}$.
Definition 4.4 (VC and PAC generalization error bounds) Suppose we are given a hypothesis space $\mathcal{H}\subseteq\mathcal{Y}^\mathcal{X}$ and a loss function $l:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}$. Then the function $\varepsilon_{\text{VC}}:\mathbb{N}\times(0,1]\to\mathbb{R}$ is called a VC generalization error bound if, and only if, for all training sample sizes $m\in\mathbb{N}$, all $\delta\in(0,1]$ and all $\mathbf{P}_\mathsf{Z}$,
$$\mathbf{P}_{\mathsf{Z}^m}\left(\forall h\in\mathcal{H}:\ \left|R\left[h\right]-R_{\text{emp}}\left[h,\mathsf{Z}\right]\right|\leq\varepsilon_{\text{VC}}\left(m,\delta\right)\right) \geq 1-\delta\,.$$
which inevitably shows that all we are concerned with is the uniform convergence of frequencies $v_z\left(\mathcal{Z}_h\right)$ to probabilities $\mathbf{P}_\mathsf{Z}\left(\mathcal{Z}_h\right)$ over the fixed set $\left\{\mathcal{Z}_h\subseteq\mathcal{Z}\,\middle|\,h\in\mathcal{H}\right\}$ of events. Note, however, that up to this point we have only shown that the uniform convergence of frequencies to probabilities provides a sufficient condition for the convergence of the generalization error of an empirical risk minimization algorithm. If we restrict ourselves to "non-trivial" hypothesis spaces and to one-sided uniform convergence, it can be shown that this is also a necessary condition.
In the following three subsections we will only be concerned with the zero-one loss $l_{0-1}$ given by equation (2.10). It should be noted that the results we will obtain can readily be generalized to loss functions taking only a finite number of values; the generalization to the case of real-valued loss functions is conceptually similar but will not be discussed in this book (see Section 4.5 for further references).
The general idea is to bound the probability of "bad training samples", i.e., training samples $z\in\mathcal{Z}^m$ for which there exists a hypothesis $h\in\mathcal{H}$ where the deviation between the empirical risk $R_{\text{emp}}\left[h,z\right]$ and the expected risk $R\left[h\right]$ is larger than some prespecified $\varepsilon\in[0,1]$. Setting the probability of this to $\delta$ and solving for $\varepsilon$ gives the required generalization error bound. If we are only given a finite number $|\mathcal{H}|$ of hypotheses $h$ then such a bound is very easily obtained by a combination of Hoeffding's inequality and the union bound.
Theorem 4.6 (VC bound for finite hypothesis spaces) Suppose we are given a hypothesis space $\mathcal{H}$ having a finite number of hypotheses, i.e., $|\mathcal{H}|<\infty$. Then, for any measure $\mathbf{P}_\mathsf{Z}$, for all training sample sizes $m\in\mathbb{N}$ and all $\varepsilon>0$,
$$\mathbf{P}_{\mathsf{Z}^m}\left(\exists h\in\mathcal{H}:\ \left|R\left[h\right]-R_{\text{emp}}\left[h,\mathsf{Z}\right]\right|>\varepsilon\right) < 2\cdot\left|\mathcal{H}\right|\cdot\exp\left(-2m\varepsilon^2\right)\,. \qquad (4.8)$$
Equivalently, for all $\delta\in(0,1]$, with probability at least $1-\delta$ over the random draw of the training sample $z\in\mathcal{Z}^m$, every $h\in\mathcal{H}$ satisfies $\left|R\left[h\right]-R_{\text{emp}}\left[h,z\right]\right|\leq\sqrt{\frac{1}{2m}\ln\left(\frac{2\left|\mathcal{H}\right|}{\delta}\right)}$.
Proof Let $\mathcal{H}=\left\{h_1,\ldots,h_{|\mathcal{H}|}\right\}$. By an application of the union bound given in Theorem A.107 we know that $\mathbf{P}_{\mathsf{Z}^m}\left(\exists h\in\mathcal{H}:\left|R\left[h\right]-R_{\text{emp}}\left[h,\mathsf{Z}\right]\right|>\varepsilon\right)$ is given by
$$\mathbf{P}_{\mathsf{Z}^m}\left(\bigcup_{i=1}^{|\mathcal{H}|}\left\{\left|R\left[h_i\right]-R_{\text{emp}}\left[h_i,\mathsf{Z}\right]\right|>\varepsilon\right\}\right) \leq \sum_{i=1}^{|\mathcal{H}|}\mathbf{P}_{\mathsf{Z}^m}\left(\left|R\left[h_i\right]-R_{\text{emp}}\left[h_i,\mathsf{Z}\right]\right|>\varepsilon\right)\,.$$
Since, for any fixed h, R [h] and Remp [h, z] are the expectation and mean of a
random variable between 0 and 1, the result follows by Hoeffding’s inequality.
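Setting the right-hand side of equation (4.8) to $\delta$ and solving for $\varepsilon$ gives the confidence interval half-width for a finite hypothesis space; the following sketch simply evaluates this expression (the numbers in the example call are arbitrary).

```python
import math

def vc_bound_finite(m, num_hypotheses, delta):
    """Sketch: set 2|H| exp(-2 m eps^2) = delta and solve for eps, giving
    eps = sqrt( ln(2|H| / delta) / (2m) ) for a finite hypothesis space."""
    return math.sqrt(math.log(2 * num_hypotheses / delta) / (2 * m))

# Example: m = 1000 training examples, |H| = 10**6 hypotheses, delta = 0.05.
print(vc_bound_finite(1000, 10**6, 0.05))   # roughly 0.09
```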
In order to generalize this proof to an infinite number $|\mathcal{H}|$ of hypotheses we use a very similar technique which, however, requires some preparatory work to reduce the analysis to a finite number of hypotheses. Basically, the approach can be decomposed into three steps:
1. First, consider a double sample $z\tilde{z}\in\mathcal{Z}^{2m}$ drawn iid, where $\tilde{z}$ is sometimes referred to as a ghost sample. We upper bound the probability that there exists a hypothesis $h\in\mathcal{H}$ such that $R_{\text{emp}}\left[h,z\right]$ is more than $\varepsilon$ apart from $R\left[h\right]$ (see equation (4.7)) by twice the probability that there exists $h'\in\mathcal{H}$ such that $R_{\text{emp}}\left[h',z\right]$ is more than $\varepsilon/2$ apart from $R_{\text{emp}}\left[h',\tilde{z}\right]$. This lemma has become known as the basic lemma and the technique is often referred to as symmetrization by a ghost sample. The idea is intuitive—it takes into account that it is very likely that the mean of a random variable is close to its expectation (see Subsection A.5.2). If it is likely that two means estimated on iid samples $z\in\mathcal{Z}^m$ and $\tilde{z}\in\mathcal{Z}^m$ are very close, then it appears very probable that a single random mean is close to its expectation; otherwise we would likely have observed a large deviation between the two means.
2. Since we assume the sample (and ghost sample) to be an iid sample it holds that, for any permutation $\pi:\{1,\ldots,2m\}\to\{1,\ldots,2m\}$,
$$\mathbf{P}_{\mathsf{Z}^{2m}}\left(\Upsilon\left(\mathsf{Z}_1,\ldots,\mathsf{Z}_{2m}\right)\right) = \mathbf{P}_{\mathsf{Z}^{2m}}\left(\Upsilon\left(\mathsf{Z}_{\pi(1)},\ldots,\mathsf{Z}_{\pi(2m)}\right)\right)\,,$$
whatever the logical formula $\Upsilon:\mathcal{Z}^{2m}\to\{\text{true},\text{false}\}$ stands for. As a consequence, for any set $\Pi_{2m}$ of permutations it follows that
$$\mathbf{P}_{\mathsf{Z}^{2m}}\left(\Upsilon\left(\mathsf{Z}_1,\ldots,\mathsf{Z}_{2m}\right)\right) = \frac{1}{\left|\Pi_{2m}\right|}\sum_{\pi\in\Pi_{2m}}\mathbf{P}_{\mathsf{Z}^{2m}}\left(\Upsilon\left(\mathsf{Z}_{\pi(1)},\ldots,\mathsf{Z}_{\pi(2m)}\right)\right) \qquad (4.9)$$
$$= \int_{\mathcal{Z}^{2m}}\frac{1}{\left|\Pi_{2m}\right|}\sum_{\pi\in\Pi_{2m}}\mathbf{I}_{\Upsilon\left(z_{\pi(1)},\ldots,z_{\pi(2m)}\right)}\,d\mathbf{F}_{\mathsf{Z}^{2m}}\left(z\right)$$
$$\leq \max_{z\in\mathcal{Z}^{2m}}\frac{1}{\left|\Pi_{2m}\right|}\sum_{\pi\in\Pi_{2m}}\mathbf{I}_{\Upsilon\left(z_{\pi(1)},\ldots,z_{\pi(2m)}\right)}\,. \qquad (4.10)$$
The appealing feature of this step is that we have reduced the problem of bounding the probability over $\mathcal{Z}^{2m}$ to counting permutations $\pi\in\Pi_{2m}$ for a fixed $z\in\mathcal{Z}^{2m}$. This step is also known as symmetrization by permutation or conditioning.
3. It remains to bound the number of permutations $\pi\in\Pi_{2m}$ such that there exists a hypothesis $h\in\mathcal{H}$ on which the deviation of the two empirical risks (on the training sample $z$ and the ghost sample $\tilde{z}$) exceeds $\varepsilon/2$. Since we considered the zero-one loss $l_{0-1}$ we know that there are at most $2^{2m}$ different hypotheses w.r.t. the empirical risks $R_{\text{emp}}\left[h',z\right]$ and $R_{\text{emp}}\left[h',\tilde{z}\right]$. If we denote the maximum number of such equivalence classes by $\mathcal{N}_\mathcal{H}\left(2m\right)$ then we can again use a combination of the union bound and Hoeffding's inequality to bound the generalization error. Note that the cardinality $|\mathcal{H}|$ of the hypothesis space in the finite case has been replaced by the number $\mathcal{N}_\mathcal{H}\left(2m\right)$.
Following these three steps we obtain the main VC and PAC bounds.
Theorem 4.7 (VC and PAC generalization error bound) For all probability measures $\mathbf{P}_\mathsf{Z}$, any hypothesis space $\mathcal{H}$, the zero-one loss $l_{0-1}$ given by equation (2.10) and all $\varepsilon>0$,
$$\mathbf{P}_{\mathsf{Z}^m}\left(\exists h\in\mathcal{H}:\ \left|R\left[h\right]-R_{\text{emp}}\left[h,\mathsf{Z}\right]\right|>\varepsilon\right) < 4\,\mathcal{N}_\mathcal{H}\left(2m\right)\exp\left(-\frac{m\varepsilon^2}{8}\right)\,, \qquad (4.11)$$
$$\mathbf{P}_{\mathsf{Z}^m}\left(\exists h\in V\left(\mathsf{Z}\right):\ R\left[h\right]>\varepsilon\right) < 2\,\mathcal{N}_\mathcal{H}\left(2m\right)\exp\left(-\frac{m\varepsilon}{4}\right)\,, \qquad (4.12)$$
$$\mathbf{P}_{\mathsf{Z}^m}\left(R\left[\mathcal{A}_{\text{ERM}}\left(\mathsf{Z}\right)\right]-R\left[h^*\right]>\varepsilon\right) < 4\,\mathcal{N}_\mathcal{H}\left(2m\right)\exp\left(-\frac{m\varepsilon^2}{32}\right)\,. \qquad (4.13)$$
Proof The first two results are proven in Appendix C.1. The final result follows from equation (4.6) using the fact that
$$2\sup_{h\in\mathcal{H}}\left|R\left[h\right]-R_{\text{emp}}\left[h,z\right]\right| \leq \varepsilon \ \Rightarrow\ R\left[\mathcal{A}_{\text{ERM}}\left(z\right)\right]-R\left[h^*\right] \leq \varepsilon \quad\Leftrightarrow$$
$$R\left[\mathcal{A}_{\text{ERM}}\left(z\right)\right]-R\left[h^*\right] > \varepsilon \ \Rightarrow\ \sup_{h\in\mathcal{H}}\left|R\left[h\right]-R_{\text{emp}}\left[h,z\right]\right| > \frac{\varepsilon}{2}\,,$$
which proves the assertion.
Confidence Intervals
Disregarding the fact that $\mathcal{N}_\mathcal{H}$ is unknown up to this point, we see that, from these assertions, we can construct confidence intervals for the expected risk $R\left[h\right]$ of the function $h$ by setting the r.h.s. of equations (4.11) and (4.12) to $\delta$. Assuming that the event (violation of the bound) has not taken place (the violation happens with probability not more than $\delta$ over the random draw of the training sample $z$), then with probability at least $1-\delta$ over the random draw of the training sample $z$, for all probability measures $\mathbf{P}_\mathsf{Z}$ and simultaneously for all functions $h\in\mathcal{H}$,
$$R\left[h\right] \leq R_{\text{emp}}\left[h,z\right] + \underbrace{\sqrt{\frac{8}{m}\left(\ln\left(\mathcal{N}_\mathcal{H}\left(2m\right)\right)+\ln\left(\frac{4}{\delta}\right)\right)}}_{\varepsilon_{\text{VC}}\left(m,\delta\right)}\,. \qquad (4.14)$$
Also, for all functions having zero training error $R_{\text{emp}}\left[h,z\right]=0$,
$$R\left[h\right] \leq \underbrace{\frac{4}{m}\left(\ln\left(\mathcal{N}_\mathcal{H}\left(2m\right)\right)+\ln\left(\frac{2}{\delta}\right)\right)}_{\varepsilon_{\text{PAC}}\left(m,\delta\right)}\,. \qquad (4.15)$$
These two bounds constitute the basic results obtained in the VC and PAC frameworks. There are some interesting conclusions we can draw:
Remark 4.8 (Race for constants) The proof of Theorem 4.7 does not provide the best constants possible. The best constants that can be achieved are 2 as a coefficient and 1 in the exponent of the exponential term, respectively. We shall see in Subsection 4.3 that an improvement of these results by orders of magnitude can only be achieved if we give up the a-priori character of the bounds. Presently, the bounds are of the same value for all decision functions that achieve the same training error $R_{\text{emp}}\left[h,z\right]$. On the one hand, this characteristic is advantageous as it gives us a general warranty however malicious the distribution $\mathbf{P}_\mathsf{Z}$ is. On the other hand, it only justifies the empirical risk minimization method as this is the only data dependent term entering the bound.
In the previous subsection we used the function $\mathcal{N}_\mathcal{H}$ which characterizes the worst case diversity of the hypothesis space $\mathcal{H}$ as a function of the training sample size. Moreover, due to the exponential term for the deviation of two means, all that matters for bounds on the generalization error is the logarithm of this function. More formally, this function is defined as follows.
$$\mathcal{N}_\mathcal{H}\left(m\right) \stackrel{\text{def}}{=} \max_{z\in\mathcal{Z}^m}\left|\left\{\left(l_{0-1}\left(h\left(x_1\right),y_1\right),\ldots,l_{0-1}\left(h\left(x_m\right),y_m\right)\right)\,\middle|\,h\in\mathcal{H}\right\}\right|\,, \qquad (4.16)$$
that is, the maximum number of different equivalence classes of functions w.r.t. the zero-one loss $l_{0-1}$ on a sample of size $m$. This is called the covering number of $\mathcal{H}$ w.r.t. the zero-one loss $l_{0-1}$. The logarithm of this function is called the growth function and is denoted by $\mathcal{G}_\mathcal{H}$, i.e.,
$$\mathcal{G}_\mathcal{H}\left(m\right) \stackrel{\text{def}}{=} \ln\left(\mathcal{N}_\mathcal{H}\left(m\right)\right)\,.$$
Clearly, the growth function depends neither on the sample nor on the unknown distribution $\mathbf{P}_\mathsf{Z}$ but only on the sample size $m$ and the hypothesis space $\mathcal{H}$. Ideally, this function would be calculated before learning and, as a consequence, we would be able to calculate the second term of the confidence intervals (4.14) and (4.15). Unfortunately, it is generally not possible to determine the exact value of the function $\mathcal{N}_\mathcal{H}$ for an arbitrary hypothesis space $\mathcal{H}$ and any $m$. Therefore one major interest in the VC and PAC community is to obtain tight upper bounds on the
growth function. One of the first such bounds is given by the following result, whose proof can be found in Appendix C.2.
Theorem 4.10 (Growth function bound and VC dimension) For any hypothesis space $\mathcal{H}$, the growth function $\mathcal{G}_\mathcal{H}$ either
This result is fundamental as it shows that we can upper bound the richness $\mathcal{N}_\mathcal{H}$ of the hypothesis space by an integer summary—the VC dimension. A lot of research has been done to obtain tight upper bounds on the VC dimension which has, by definition, the following combinatorial interpretation: If $\mathcal{H}_\mathcal{Z}=\left\{\left\{\left(x,y\right)\in\mathcal{Z}\,\middle|\,l_{0-1}\left(h\left(x\right),y\right)=1\right\}\,\middle|\,h\in\mathcal{H}\right\}$ is the induced set of events that a hypothesis $h\in\mathcal{H}$ labels $\left(x,y\right)\in\mathcal{Z}$ incorrectly, then the VC dimension $\vartheta_\mathcal{H}$ of $\mathcal{H}$ is the largest natural number $\vartheta$ such that there exists a sample $z\in\mathcal{Z}^\vartheta$ of size $\vartheta$ which can be subdivided in all $2^\vartheta$ different ways by (set) intersection with $\mathcal{H}_\mathcal{Z}$. Then we say that $\mathcal{H}_\mathcal{Z}$ shatters $z$. If no such number exists we say that the VC dimension of $\mathcal{H}$ or $\mathcal{H}_\mathcal{Z}$ is infinite. Sometimes the VC dimension is also called the shatter coefficient.
In order to relate the above bound on the growth function in terms of the VC dimension to the confidence intervals (4.14) and (4.15) we make use of the inequality given in Theorem A.105 which states that, for all $m>\vartheta$,
$$\sum_{i=0}^{\vartheta}\binom{m}{i} < \left(\frac{em}{\vartheta}\right)^{\vartheta}\,. \qquad (4.19)$$
4 We shall omit the subscript of ϑ whenever the hypothesis space is clear from context.
Figure 4.2 Growth of the complexity term $\frac{\vartheta}{m}\left(\ln\left(\frac{2m}{\vartheta}\right)+1\right)$ in the VC confidence interval (4.14) as a function of $\frac{\vartheta}{m}$. (a) On the whole interval $[0,1]$ the increase is clearly sub-linear. (b) For very small values of $\frac{\vartheta}{m}<\frac{1}{30}$ the growth is almost linear.
Remark 4.11 (Sufficient training sample size) Using the upper bound (4.19) on the upper bound (4.17) for the growth function we obtain for the confidence interval (4.14) the following expression
$$\forall\,2m>\vartheta:\quad R\left[h\right] \leq R_{\text{emp}}\left[h,z\right] + \sqrt{8\left(\frac{\ln\left(\frac{4}{\delta}\right)}{m}+\frac{\vartheta}{m}\left(\ln\left(\frac{2m}{\vartheta}\right)+1\right)\right)}\,,$$
by the constant factor of 8, we will have nontrivial results in these regimes. Vapnik suggested this as a rule of thumb for the practicability of his bound. By the plots in Figure 4.2 it is justifiable to say that, for $\frac{m}{\vartheta}>30$, the training sample size is sufficiently large to guarantee a small generalization error of the empirical risk minimization algorithm.
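For concreteness, the following sketch evaluates the square-root confidence term of Remark 4.11 for given values of $m$, $\vartheta$ and $\delta$ (it presupposes $2m>\vartheta$; the numbers in the example call are arbitrary).

```python
import math

def vc_confidence_term(m, vc_dim, delta):
    """Sketch of the confidence term in Remark 4.11: the square-root expression
    added to the training error, using the growth function bound (2m e / vc_dim)."""
    assert 2 * m > vc_dim
    complexity = (vc_dim / m) * (math.log(2 * m / vc_dim) + 1)
    return math.sqrt(8 * (math.log(4 / delta) / m + complexity))

# Example with m / vc_dim = 300:
print(vc_confidence_term(m=30000, vc_dim=100, delta=0.05))   # roughly 0.45
```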
Figure 4.3 Curse of dimensionality. In order to reliably estimate a density in $\mathbb{R}^n$ we subdivide the $n$-dimensional space into cells and estimate their probability by the frequency with which an example $x\in\mathbf{x}$ falls into them. Increasing the number of cells would increase the precision of this estimate. For a fixed precision, however, the number of cells depends exponentially on the number $n$ of dimensions.
of margin maximization. First, having the norm of each normal vector $\mathbf{w}$ fixed, margin maximization aims to minimize the margin loss $l_{\text{margin}}$ given by equation (2.42). Second, defining the hypothesis space so as to achieve a minimum real-valued output of one at each training point makes $\mathcal{H}$ data dependent and, thus, inappropriate for theoretical studies. Nevertheless this formulation of the problem is algorithmically advantageous.
Example 4.13 (VC dimension and parameters) Let us use the following three
examples to illustrate the difference between the dimensionality of parameter space
and the VC dimension (see Section 4.5 for references containing rigorous proofs).
1. Consider 𝒳 = ℝ and
$$\mathcal{H} = \left\{ x \mapsto \operatorname{sign}\left( \sum_{i=1}^{n} w_i^2\, x^{i}\operatorname{sign}(x)^{i+1} + w_0 \right) \,\middle|\, (w_0, w_1, \ldots, w_n) \in \mathbb{R}^{n+1} \right\}.$$
Clearly, all functions in ℋ are monotonically increasing and have exactly one zero.
Thus the maximum size d of a training sample z that can be labeled in all 2^d
different ways is one. This implies that the VC dimension of ℋ is one. As this
holds regardless of n the VC dimension can be much smaller than the number of
parameters. It is worth mentioning that for all n ∈ ℕ there exists a one-dimensional
parameterization of ℋ—each w ∈ ℝ^{n+1} is represented by its zero—which, however,
is difficult to find a-priori.
2. Consider 𝒳 = ℝ^n and
$$\mathcal{H} = \left\{ x \mapsto \operatorname{sign}\left(\langle \mathbf{w}, \mathbf{x} \rangle\right) \,\middle|\, \mathbf{w} \in \mathbb{R}^n \right\},$$
where x = φ(x) for some fixed feature mapping φ: 𝒳 → 𝒦 ⊆ ℓ_2^n (see Definition
2.2). Given a sample x = (x_1, ..., x_m) of m objects we thus obtain the m × n data
matrix X = (x_1; ...; x_m) ∈ ℝ^{m×n}. If the training sample size m is bigger than
the number n of dimensions the matrix X has at most rank n, i.e., Xw = t has, in
general, no solution. It follows that the VC dimension can be at most n. In the case
of m = n, by choosing the training sample (x_1, ..., x_m) such that x_i = e_i, we see
that Xw = Iw = w, that is, for any labeling y ∈ {−1, +1}^m, we will find a vector
w ∈ ℝ^n that realizes the labeling. Therefore the VC dimension of linear classifiers
equals the number n of parameters.
3. Consider 𝒳 = ℝ and
$$\mathcal{H} = \left\{ x \mapsto \operatorname{sign}\left(\sin(wx)\right) \,\middle|\, w \in \mathbb{R} \right\}.$$
Through w we can parameterize the frequency of the sine and thus, for suitably
chosen training samples x ∈ 𝒳^m of any size m, we will find 2^m (extremely high)
values of w that label the m points in all 2^m different ways. As a consequence the
VC dimension is infinite though we have only one parameter. The last two examples are illustrated numerically in the sketch below.
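The following sketch (our own illustration) verifies the second and third examples numerically. For the sine class we use the geometrically spaced points x_i = 10^{-i} and an explicit formula for w; these particular choices are ours and are only one of many possible witnesses.

```python
# Numerical illustration of examples 2 and 3 above; all concrete choices
# (basis-vector sample, x_i = 10^{-i}, the formula for w) are our own.
import itertools
import numpy as np

# --- Example 2: linear classifiers in R^n shatter the canonical basis -------
n = 4
X = np.eye(n)                      # training sample x_i = e_i, so X w = w
for y in itertools.product([-1, 1], repeat=n):
    w = np.array(y, dtype=float)   # choosing w = y realizes the labelling
    assert np.all(np.sign(X @ w) == np.array(y))
print(f"linear classifiers: all {2**n} labellings of the basis vectors realized")

# --- Example 3: sign(sin(w x)) shatters arbitrarily many points -------------
m = 6
x = np.array([10.0 ** (-i) for i in range(1, m + 1)])
for y in itertools.product([-1, 1], repeat=m):
    c = [(1 - yi) // 2 for yi in y]                  # c_i = 0 for +1, 1 for -1
    w = np.pi * (1 + sum(ci * 10 ** (i + 1) for i, ci in enumerate(c)))
    assert np.all(np.sign(np.sin(w * x)) == np.array(y))
print(f"sign(sin(wx)): all {2**m} labellings of {m} points realized with one parameter")
```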
The analysis presented in the previous subsection revealed that the VC dimension
of ℋ is the fundamental quantity that controls the uniform convergence of empiri-
cal risks to expected risks and, as such, the generalization error of an empirical risk
5 In the theory of multiple statistical tests, the resulting statistical procedure is often called a Bonferroni test.
$$> 1 - \sum_{i=1}^{s} \delta\, P_S(\mathcal{H}_i) = 1 - \delta. \qquad \text{(by assumption)}$$
Figure 4.4 Structural risk minimization in action. Here we used hypothesis spaces ℋ_i
such that ϑ_{ℋ_i} = i and ℋ_i ⊆ ℋ_{i+1}. This implies that the training errors of the empirical
risk minimizers can only be decreasing, which leads to the typical situation depicted: the
training error decreases while the VC complexity term increases with the model index. Note
that lines are used for visualization purposes only because we consider a finite set of
hypothesis spaces.
Remark 4.15 (The role of P_S) The role of the numbers P_S(ℋ_i) seems somewhat
counterintuitive as we appear to be able to bias our estimate by adjusting these
parameters. The belief P_S must, however, be specified in advance and represents
some apportionment of our confidence to the different points where failure might
occur. We recover the standard PAC and VC bound if P_S is peaked at exactly one
hypothesis space. In the first work on SRM it was implicitly assumed that these
numbers are 1/s. Another interesting aspect of P_S is that, thanks to the exponential
term in Theorem 4.7, using a uniform measure P_S we can consider up to e^m
different hypothesis spaces before deteriorating to trivial bounds.
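As a rough illustration of Figure 4.4, the following sketch evaluates, for each hypothesis space ℋ_i, the training error plus the VC confidence term with the confidence δ shared according to a uniform P_S, and selects the minimizer. The bound form is the one from Remark 4.11, and the training error values are invented for the purpose of the printout.

```python
# A sketch of structural risk minimization as depicted in Figure 4.4, assuming
# the VC confidence interval of Remark 4.11 with delta replaced by delta*P_S(i).
from math import log, sqrt

def srm_objective(train_err, vc, m, delta, p_s):
    """Training error plus VC confidence term with the shared confidence."""
    return train_err + sqrt(8.0 / m * (log(4.0 / (delta * p_s))
                                       + vc * (log(2.0 * m / vc) + 1.0)))

m, delta, s = 10000, 0.05, 20
train_errors = [0.5 * (0.8 ** i) for i in range(1, s + 1)]   # decreasing, as in Figure 4.4
bounds = [srm_objective(train_errors[i - 1], i, m, delta, 1.0 / s) for i in range(1, s + 1)]
best = min(range(1, s + 1), key=lambda i: bounds[i - 1])
print(f"selected model index: {best}, guaranteed risk bound: {bounds[best - 1]:.3f}")
```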
Using structural risk minimization we are able to make the complexity, as measured
by the VC dimension of the hypothesis space, a variable of a model selection
algorithm while still having guarantees for the expected risks. Nonetheless, we
recall that the decomposition of the hypothesis space must be done independently
of the observed training sample z. This rule certainly limits the applicability of
structural risk minimization to an a-priori complexity penalization strategy. The
resulting bounds effectively ignore the sample z ∈ 𝒵^m except with regard to the
training error R_emp[𝒜(z), z]. A prominent example of the misuse of structural
risk minimization was the first generalization error bounds for the support vector
machine algorithm. It has become commonly accepted that the success of support
vector machines can be explained through the structuring of the hypothesis space
of linear classifiers in terms of the geometrical margin γ z (w) of a linear classifier
having normal vector w (see Definition 2.30). Obviously, however, the margin itself
is a quantity that strongly depends on the sample z and thus a rigorous application
of structural risk minimization is impossible! Nevertheless, we shall see in the
following section that the margin is, in fact, a quantity which allows an algorithm
to control its generalization error.
In order to overcome this limitation we will introduce the luckiness framework.
The goal of the luckiness framework is to make the complexity penalization data
dependent: in contrast to the VC and PAC framework, the new uniform bound on the expected
risk R[h] of all hypotheses h ∈ ℋ is allowed to depend on the training sample z
and the single hypothesis h considered6 .
6 Note that a VC and PAC generalization error bound is implicitly dependent on the training error Remp [h, z].
Given such a result we have automatically obtained a bound for the algorithm
which directly minimizes ε_L(|z|, δ, z, h), i.e.,
$$\mathcal{A}_{\varepsilon_L}(z) \stackrel{\mathrm{def}}{=} \operatorname*{argmin}_{h \in \mathcal{H}}\ \varepsilon_L\left(|z|, \delta, z, h\right). \tag{4.22}$$
Note that at present only PAC results for the zero-one loss l_{0−1} are available. Hence
we must assume that, for the training sample z, there exists at least one hypothesis
h ∈ ℋ such that R_emp[h, z] = 0.
The additional information we exploit in the case of sample based decompositions
of the hypothesis space ℋ is encapsulated in a luckiness function. The main
idea is to fix in advance some assumption about the measure PZ , and encode this
assumption in a real-valued function L defined on the space of training samples
z ∈ 𝒵^m and hypotheses h ∈ ℋ. The value of the function L indicates the extent
to which the assumption is satisfied for the particular sample and hypothesis. More
formally, this reads as follows.
The quantity L plays the central role in what follows. Intuitively speaking, for
a given training sample z and hypothesis h the level L(z, h) counts the number
of equivalence classes w.r.t. the zero-one loss l_{0−1} in ℋ which contain functions
g ∈ ℋ that are luckier or at least as lucky as h. The main idea of the luckiness
framework is to replace the coarse worst case argument—taking the covering num-
ber as the maximum number of equivalence classes with different losses for any
sample of size 2m—by the data dependent number ω(L(z, h), δ) of equivalence
classes containing functions luckier than, or as lucky as, h.
The intuition behind this definition is that it captures when the luckiness can be
estimated from the training sample (z_1, ..., z_m) ∈ 𝒵^m with high probability.
We have to make sure that with small probability (at most δ) over the random
draw of a training and ghost sample there are more than ω (L ((z 1 , . . . , z m ) , h) , δ)
equivalence classes that contain functions that are luckier than h on the training
and ghost sample (z 1 , . . . , z m , z m+1 , . . . , z 2m ). Now we are ready to give the main
result in the luckiness framework.
$$R[h] \le \frac{2}{m}\left(d + \operatorname{ld}\frac{4}{\delta}\right). \tag{4.23}$$
there might exist a collection of training samples z ∈ 𝒵^m such that, for all
measures PZ ,
Such collections are called positively biased relevant collections and can effec-
tively be used to tighten the confidence interval if the training sample z is wit-
nessing the prior belief expressed via positively biased relevant collections. Hence
it is necessary to detect if a given training sample z falls into one of the prese-
lected positively biased relevant collections. The function ω in Definition 4.18 can
be considered to serve exactly this purpose.
Before finishing this section we will give two examples of luckiness functions.
For further examples the interested reader is referred to the literature mentioned in
Section 4.5.
Example 4.21 (PAC luckiness) In order to show that the luckiness framework is,
in fact, a generalization of the PAC framework we consider the following luckiness
function L(z, h) = −ϑ where ϑ is the VC dimension of ℋ. Then, by the upper
bound given in Theorem A.105, we know that L is probably smooth w.r.t.
$$\omega(L, \delta) = \left(\frac{2em}{-L}\right)^{-L},$$
because the number of equivalence classes on a sample of size 2m can never exceed
that number. If we set p_i = 1 if, and only if, i = ϑ we see that, by the luckiness
bound (4.23), simultaneously for all functions h that achieve zero training error
R_emp[h, z] = 0,
$$R[h] \le \frac{2}{m}\left(\vartheta \operatorname{ld}\frac{2em}{\vartheta} + \operatorname{ld}\frac{4}{\delta}\right),$$
which is, up to some constants, the same result as given by (4.15). Note that this
luckiness function totally ignores the sample z as mentioned in the context of the
classical PAC framework.
Example 4.22 (Empirical VC dimension) For a given training sample z we measure the richness of ℋ on the sample itself: for every subsample size j let
$$\mathcal{N}_{\mathcal{H}}(z, j) \stackrel{\mathrm{def}}{=} \max_{\tilde{z} \subseteq z\,:\,|\tilde{z}| = j} \left|\left\{ \left( l_{0-1}\left(h\left(\tilde{x}_1\right), \tilde{y}_1\right), \ldots, l_{0-1}\left(h\left(\tilde{x}_j\right), \tilde{y}_j\right) \right) \,\middle|\, h \in \mathcal{H} \right\}\right|,$$
and let the empirical VC dimension ϑ_eff(z) be the largest j for which this number equals 2^j.
Note that the classical VC dimension is obtained if z contains all points of the
space 𝒳. Then we show in Appendix C.4 that L(z, h) = −ϑ_eff(z) is probably
smooth w.r.t. the function
$$\omega(L, \delta) = \left(\frac{em}{-2L - 2\ln(\delta)}\right)^{-4L - 4\ln(\delta)},$$
for all δ ∈ (0, ½]. This shows that we can replace the VC dimension ϑ, known
before the training sample arrives, with the empirical VC dimension ϑ_eff(z) after
having seen the data.
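A brute-force sketch of the empirical VC dimension follows; it checks, for every subsample size j, whether some subsample is labeled in all 2^j ways. The hypothesis class (signed thresholds on the real line) and the sample are our own illustrative choices, and the shattering test is phrased directly in terms of label patterns rather than loss patterns.

```python
# Brute-force empirical VC dimension on a small sample; illustrative only.
from itertools import combinations

def empirical_vc_dimension(sample, hypotheses):
    """Largest j for which some subsample of size j is shattered by `hypotheses`."""
    best = 0
    for j in range(1, len(sample) + 1):
        for subset in combinations(sample, j):
            patterns = {tuple(h(x) for x in subset) for h in hypotheses}
            if len(patterns) == 2 ** j:          # all dichotomies realized
                best = j
                break
    return best

thresholds = [0.5, 1.5, 2.5, 3.5]
hypotheses = [lambda x, t=t, s=s: s * (1 if x > t else -1)
              for t in thresholds for s in (+1, -1)]
sample = [1.0, 2.0, 3.0]
print("empirical VC dimension:", empirical_vc_dimension(sample, hypotheses))
```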
Remark 4.23 (Vanilla luckiness) The main luckiness result as presented in Theo-
rem 4.19 is a simplified version of the original result. In the full version the notion
of probable smoothness is complicated by allowing the possibility of exclusion of a
data-dependent fraction of the double sample before bounding the number of equiv-
alence classes of luckier functions H (h, z). As a consequence the data-dependent
fraction is added to the r.h.s. of equation (4.23). Using the more complicated luck-
iness result it can be shown that the margin γ z (w) of a linear classifier parame-
terized by w is a probably smooth luckiness function. However, in the next section
we shall present an analysis for linear classifiers in terms of margins which yields
better results than the results in the luckiness framework. It is worth mentioning
that for some distributions the margin γ z (w) of any classifier h w can be arbitrarily
small and thus the bound can be worse than the a-priori bounds obtained in the
classical PAC and VC frameworks.
equals the number given by the covering number (the exponentiated growth
function). In fact, assuming that this maximal number of equivalence classes is attained by
a sample z_worst, this happens to be the case only if P_{Z^m}(z_worst) = 1.⁸
On the other hand, in the case of linear classifiers, i.e., x ↦ ⟨x, w⟩ where
x = φ(x) and φ: 𝒳 → 𝒦 ⊆ ℓ_2^n (see also Definition 2.2), it seems plausible that
the margin, that is, the minimal real-valued output before thresholding, provides
confidence about the expected risk. Taking the geometrical picture given in Figure
2.1 on page 23 into account we see that, for a given training sample z ∈ 𝒵^m, the
covering number on that particular sample is the number of different polyhedra
on the surface of the unit hypersphere. Having attained a functional margin of
γ̃_z(w) (which equals γ_z(w) if ‖w‖ = 1) when using h_w(x) = sign(⟨x, w⟩) for
classification, we know that we can inscribe a ball of radius at least γ̃ z (w) in one of
the equivalence classes—the version space (see also Subsection 2.4.3). Intuitively
we are led to ask “how many equivalence classes can maximally be achieved if we
require the margin to be γ̃ z (w) beforehand?”. Ideally, we would like to use this
number in place of the worst case covering number. The margin γ̃_z(w) is best viewed as the scale
at which we look on the hypothesis space of real-valued functions. If the margin
is at least γ then two functions are considered to be equivalent if their real-valued
outputs differ by not more than γ on the given training sample z because they must
correspond to the same classification which is carried out by thresholding the real-
valued outputs. The scale sensitive version of the covering number when using
real-valued functions f ∈ for classification learning is defined as follows.
8 Since we already assumed that the training sample z worst is iid w.r.t. a fixed distribution PZ , tightness of the
growth function based bounds is only achieved if
PZm (z worst ) = 1 .
But, if there is only one training sample z worst this is impossible due to the well known “concentration of measure
phenomenon in product spaces” (see Talagrand (1996)).
Figure 4.5 (Left) 20 real-valued functions (solid lines) together with two training points
x₁, x₂ ∈ ℝ (crosses). The functions are given by f(x) = α₁k(x₁, x) + α₂k(x₂, x) where α
is constrained to fulfill α′Gα ≤ 1 (see Definition 2.15) and k is given by the RBF kernel
(see Table 2.1). (Right) A cover F_γ((x₁, x₂)) for the function class (not the smallest). In
the simple case of m = 2 each function f ∈ ℱ is reduced to two scalars f(x₁) and f(x₂)
and can therefore be represented as a point in the plane. Each big black dot corresponds
to a function f̂ in the cover F_γ((x₁, x₂)); all the gray dots in the box of side length 2γ
correspond to the functions covered.
$$\mathcal{N}^{\infty}_{\mathcal{F}}(\gamma, m) \stackrel{\mathrm{def}}{=} \sup_{x \in \mathcal{X}^m} \mathcal{N}^{\infty}_{\mathcal{F}}(\gamma, x).$$
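The following sketch (our own illustration) computes a greedy upper bound on the empirical l∞ covering number of a set of kernel expansions evaluated on a fixed sample, in the spirit of Figure 4.5; greedy covering only upper bounds the smallest cover, and all concrete numbers are invented.

```python
# Greedy upper bound on the empirical l_infinity covering number on a sample.
import numpy as np

def empirical_covering_number(outputs, gamma):
    """Greedy cover of the rows of `outputs` (one row per function) at scale gamma."""
    uncovered = list(range(len(outputs)))
    cover_size = 0
    while uncovered:
        center = outputs[uncovered[0]]
        uncovered = [i for i in uncovered
                     if np.max(np.abs(outputs[i] - center)) > gamma]
        cover_size += 1
    return cover_size

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=5)                       # m = 5 sample points
alphas = rng.normal(size=(200, 5))                        # 200 random kernel expansions
K = np.exp(-(x[:, None] - x[None, :]) ** 2)               # RBF Gram matrix on the sample
outputs = alphas @ K                                      # f_j(x_i) for every function
for gamma in (0.25, 0.5, 1.0, 2.0):
    print(f"gamma = {gamma:4.2f}  cover size <= {empirical_covering_number(outputs, gamma)}")
```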
At first glance, it may seem odd that we consider only the scale of half the
observed margin γ̃_z(w) and a covering number for a double sample of size 2m.
These are technical requirements which might be resolved using a different proving
technique. Note that the covering number 𝒩^∞_ℱ(γ, m) is independent of the sample
z ∈ 𝒵^m, which allows us to define a function⁹ e: ℕ → ℝ such that
$$e(d) \stackrel{\mathrm{def}}{=} \min\left\{ \gamma \in \mathbb{R}^{+} \,\middle|\, \mathcal{N}^{\infty}_{\mathcal{F}}(\gamma, 2m) \le 2^{d} \right\} \quad\Rightarrow\quad \mathcal{N}^{\infty}_{\mathcal{F}}\left(e(d), 2m\right) \le 2^{d}, \tag{4.24}$$
that is, e(d) is the smallest margin which ensures that the covering number
𝒩^∞_ℱ(e(d), 2m) is less than or equal to 2^d. Note that we must assume that the
minimum γ ∈ ℝ⁺ will be attained. Hence, the condition 𝒩^∞_ℱ(γ̃_z(w)/2, 2m) ≤ 2^d
is equivalent to γ̃_z(w) ≥ 2 · e(d). Now, in order to bound the probability of the
above mentioned event we proceed in a similar manner to the PAC analysis.
9 This function is also known as the dyadic entropy number (see also Appendix A.3.1).
noticing that this last step is the point where we use the observed margin γ̃_z(w)
to boil down the worst case number (when only considering the binary valued
functions) to the number 2^d that needs to be witnessed by the observed margin
γ̃_z(w). Using the fact that, for all d ∈ ℕ⁺, 2^{d − εm/2} ≥ 1 whenever mε ≤ 2d, we
have shown the following theorem.
An immediate consequence is that, with probability at least 1 − δ over the random
draw of the training sample z ∈ 𝒵^m, the following statement ϒ_i(z, m, δ) is true:
$$\forall h_{\mathbf{w}} \in V(z):\quad \left( R[h_{\mathbf{w}}] \le \frac{2}{m}\left(i + \operatorname{ld}\frac{2}{\delta}\right) \right) \;\vee\; \left( \mathcal{N}^{\infty}_{\mathcal{F}}\left(\frac{\tilde{\gamma}_z(\mathbf{w})}{2}, 2m\right) > 2^{i} \right).$$
Noticing that the bound becomes trivial for i > ⌊m/2⌋ (because the expected risk is
at most one) we can safely apply the multiple testing lemma 4.14 with uniform P_S
over the natural numbers i ∈ {1, ..., ⌊m/2⌋}. Thus we have shown the following
powerful corollary of Theorem 4.25.
from above by
$$R[h_{\mathbf{w}}] \le \frac{2}{m}\left( \left\lceil \operatorname{ld}\left( \mathcal{N}^{\infty}_{\mathcal{F}}\left(\frac{\tilde{\gamma}_z(\mathbf{w})}{2}, 2m\right) \right) \right\rceil + \operatorname{ld}(m) + \operatorname{ld}\frac{1}{\delta} \right). \tag{4.26}$$
Although this result cannot immediately be used to uniformly bound the expected
risk of h w we see that maximizing the margin γ̃ z (w) will minimize the upper bound
on the expected error R [h w ]. Thus it justifies the class of large margin algorithms
introduced in Chapter 2.
Remark 4.27 (Bounds using the empirical covering number) By a more careful
analysis it is possible to show that we can use the empirical covering number
𝒩^∞_ℱ(γ̃_z(w)/2, x) in place of the worst case covering number 𝒩^∞_ℱ(γ̃_z(w)/2, 2m),
where x ∈ 𝒳^m is the observed sample of m inputs. This, however, can only be
achieved at the price of less favorable constants in the bound because we do not
observe a ghost sample and therefore must use the training sample z ∈ 𝒵^m to es-
timate 𝒩^∞_ℱ(γ̃_z(w)/2, 2m). Further, for practical application of the result, it still
remains to characterize the empirical covering number 𝒩^∞_ℱ(γ̃_z(w)/2, x) by an
easy-to-compute quantity of the sample z ∈ 𝒵^m.
It would be desirable to make practical use of equation (4.26) for bounds similar to
those given by Theorem 4.7. This is not immediately possible, the problem being
the determination of 𝒩^∞_ℱ at the observed margin. This problem is addressed using a
one-integer summary which, of course, is now allowed to vary for the different scales
γ. This summary is known as a generalization of the VC dimension for
real-valued functions.
Figure 4.7 (Left) Two points x₁ and x₂ on the real line. The set ℱ is depicted by the
functions f₁, ..., f₄. (Right) The maximum γ ≈ 0.37 (vertical bar) we can
consider for γ-shattering is quite large as we can shift the functions by different values
r₁ and r₂ for x₁ and x₂, respectively. The shifted set ℱ − r₂ for x₂ is shown by dashed
lines. Note that f₁ − rᵢ, f₂ − rᵢ, f₃ − rᵢ and f₄ − rᵢ realize y = (−1, −1), y = (−1, +1),
y = (+1, −1) and y = (+1, +1), respectively.
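A small sketch of the γ-shattering test behind the fat shattering dimension follows; it mirrors the situation of Figure 4.7 with two points and four functions, searching a grid of shift values r. The grid and the function values are our own choices.

```python
# Checking gamma-shattering of two points by a finite set of real-valued functions.
import itertools
import numpy as np

def is_gamma_shattered(F, gamma, grid):
    """F[j, i] = f_j(x_i). Try shifts r from `grid` and test all sign patterns."""
    m = F.shape[1]
    for r in itertools.product(grid, repeat=m):
        r = np.array(r)
        ok = True
        for y in itertools.product([-1, 1], repeat=m):
            # need some f with y_i * (f(x_i) - r_i) >= gamma for every i
            if not np.any(np.all(np.array(y) * (F - r) >= gamma, axis=1)):
                ok = False
                break
        if ok:
            return True
    return False

# four functions evaluated at two points, chosen so that the class shatters
F = np.array([[+1.0, +1.0],
              [+1.0, -1.0],
              [-1.0, +1.0],
              [-1.0, -1.0]])
grid = np.linspace(-0.5, 0.5, 11)
for gamma in (0.25, 0.5, 0.9, 1.1):
    print(f"gamma = {gamma:4.2f}  shattered: {is_gamma_shattered(F, gamma, grid)}")
```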
In order to see that the fat shattering dimension is clearly a generalization of the VC
dimension we note that, for γ → 0, the fat shattering dimension lim_{γ→0} fat_ℱ(γ)
equals the VC dimension ϑ of the thresholded set ℋ = {sign(f) | f ∈ ℱ} of
binary classifiers. By using the scale parameter γ ∈ ℝ⁺ we are able to study the
complexity of a set of real-valued functions proposed for binary classification at
a much finer scale (see also Figure 4.7). Another advantage of this dimension is
that, similarly to the VC and PAC theory presented in Section 4.2, we can use it
to bound the only quantity entering the bound (4.26)—the log-covering number
ld(𝒩^∞_ℱ(γ̃_z(w)/2, 2m)). In 1997, Alon et al. proved the following lemma as a
byproduct of a more general result regarding the characterization of Glivenko-
Cantelli classes.
This bound is very similar to the bound presented in Theorem 4.10. The VC dimen-
sion ϑ has been replaced by the corresponding value fat_ℱ(γ) of the fat shattering
dimension. The most important difference is the additional ld(4m(b − a)²/γ²)
term, the necessity of which is still an open question in learning theory. The lemma
is not directly applicable to the general case of real-valued functions f ∈ ℱ be-
cause these may be unbounded. Thus the idea is to truncate the functions into a
range [−τ, +τ] by the application of a truncation operator T_τ, i.e.,
$$T_\tau(\mathcal{F}) \stackrel{\mathrm{def}}{=} \left\{ T_\tau(f) \mid f \in \mathcal{F} \right\}, \qquad T_\tau(f)(x) \stackrel{\mathrm{def}}{=} \begin{cases} \tau & \text{if } f(x) > \tau \\ f(x) & \text{if } -\tau \le f(x) \le \tau \\ -\tau & \text{if } f(x) < -\tau. \end{cases}$$
Obviously, for all possible scales γ ∈ ℝ⁺ we know that the fat shattering dimen-
sion fat_{T_τ(ℱ)}(γ) of the truncated set of functions is less than or equal to the fat
shattering dimension fat_ℱ(γ) of the non-truncated set ℱ since every sample that
is γ-shattered by T_τ(ℱ) can be γ-shattered by ℱ, trivially, using the same but
nontruncated functions. As a consequence we know that, for any value τ ∈ ℝ⁺ we
might use for truncation, it holds that the log-covering number of the truncated set
T_τ(ℱ) of functions can be bounded in terms of the fat shattering dimension of ℱ
and the value of τ:
$$\operatorname{ld}\left(\mathcal{N}^{\infty}_{T_\tau(\mathcal{F})}(\gamma, m)\right) \le 1 + \operatorname{fat}_{\mathcal{F}}\left(\frac{\gamma}{4}\right) \operatorname{ld}\left(\frac{4em\tau}{\operatorname{fat}_{\mathcal{F}}\left(\frac{\gamma}{4}\right)\cdot\gamma}\right) \operatorname{ld}\left(\frac{16m\tau^{2}}{\gamma^{2}}\right).$$
In this case the relevant condition reads γ̃_z(w) ≥ 2 · ẽ(d) which, together with Lemma 4.29, implies that the log-covering number
ld(𝒩^∞_{T_{ẽ(d)}(ℱ)}(γ̃_z(w)/2, 2m)) cannot exceed
$$1 + \operatorname{fat}_{\mathcal{F}}\left(\frac{\tilde{\gamma}_z(\mathbf{w})}{8}\right) \operatorname{ld}\left(\frac{8em\cdot\tilde{e}(d)}{\operatorname{fat}_{\mathcal{F}}\left(\frac{\tilde{\gamma}_z(\mathbf{w})}{8}\right)\cdot\frac{\tilde{\gamma}_z(\mathbf{w})}{2}}\right) \operatorname{ld}\left(\frac{32m\cdot\left(\tilde{e}(d)\right)^{2}}{\left(\frac{\tilde{\gamma}_z(\mathbf{w})}{2}\right)^{2}}\right)
\le \underbrace{1 + \operatorname{fat}_{\mathcal{F}}\left(\frac{\tilde{\gamma}_z(\mathbf{w})}{8}\right) \operatorname{ld}\left(\frac{8em}{\operatorname{fat}_{\mathcal{F}}\left(\tilde{\gamma}_z(\mathbf{w})/8\right)}\right) \operatorname{ld}(32m)}_{b\left(\tilde{\gamma}_z(\mathbf{w})\right)}.$$
In other words, by Lemma 4.29 we know that whenever the training sample z ∈ 𝒵^m
and the weight vector w ∈ 𝒦 under consideration satisfy b(γ̃_z(w)) ≤ d then
the log-covering number ld(𝒩^∞_{T_{ẽ(d)}(ℱ)}(γ̃_z(w)/2, 2m)) is upper bounded by d. By
Theorem 4.25 it follows that, with probability at least 1 − δ over the random draw
of the training sample z ∈ 𝒵^m, the statement
$$\Upsilon_i(z, m, \delta) \equiv \forall h_{\mathbf{w}} \in V(z):\quad \left( b\left(\tilde{\gamma}_z(\mathbf{w})\right) > i \right) \;\vee\; \left( R[h_{\mathbf{w}}] \le \frac{2}{m}\left(i + \operatorname{ld}\frac{2}{\delta}\right) \right)$$
is true. As a consequence, stratifying over the ⌊m/2⌋ different natural numbers i
using the multiple testing lemma 4.14 and a uniform P_S gives Theorem 4.30.
Notice that by the assumptions of Lemma 4.29 the margin γ̃_z(w) must be such
that fat_ℱ(γ̃_z(w)/8) is less than or equal to 2m.
Ignoring constants, it is worth noticing that compared to the original PAC bound
given by equation (4.15) we have an additional ld(32m) factor in the complexity
term of equation (4.27), which is due to the extra term in Lemma 4.29. Note that, in
contrast to the classical PAC result, we do not know beforehand the value of the
margin—it is only observed after the training sample arrives.
Using Lemma 4.29 we reduced the problem of bounding the covering number 𝒩^∞_ℱ
to the problem of bounding the fat shattering dimension of the class of linear classifiers.
Lemma 4.31 (Fat shattering bound for linear classifiers) Suppose that X =
{x ∈ 𝒦 | ‖x‖ ≤ ς} is a ball of radius ς in an inner product space 𝒦 and con-
sider the linear classifiers
$$\mathcal{F} = \left\{ \mathbf{x} \mapsto \langle \mathbf{w}, \mathbf{x} \rangle \,\middle|\, \|\mathbf{w}\| \le B,\ \mathbf{x} \in X \right\}$$
of bounded norm. Then the fat shattering dimension of ℱ satisfies
$$\operatorname{fat}_{\mathcal{F}}(\gamma) \le \left(\frac{B\varsigma}{\gamma}\right)^{2}. \tag{4.28}$$
The proof can be found in Appendix C.5. In terms of Figure 2.6 we see that (4.28)
has an intuitive interpretation: The complexity measured by fat at scale γ must
be viewed with respect to the total extent of the data. If the margin has a small
absolute value, its effective incurred complexity is large only if the extent of the
data is large. Thus, for linear classifiers, the geometrical margin10 γ z (w) itself does
not provide any measure of the complexity without considering the total extent of
the data. Combining Lemma 4.31 with the bound given in Theorem 4.30 we obtain
a practically useful result for the expected risk of linear classifiers in terms of the
observed margin. Note that we must ensure that fat (γ z (w) /8) is at most 2m.
Theorem 4.32 (PAC margin bound) Suppose 𝒦 is a given feature space. For all
probability measures P_Z such that P_X({x | ‖φ(x)‖ ≤ ς}) = 1, for any δ ∈ (0, 1],
with probability at least 1 − δ over the random draw of the training sample z ∈ 𝒵^m,
if we succeed in correctly classifying m samples z with a linear classifier f_w having
a geometrical margin γ_z(w) of at least √(32/m)·ς, then the expected risk R[h_w] of
h_w is bounded from above by the margin bound (4.29).
10 Note that for ‖w‖ = 1 the functional margin γ̃_z(w) and geometrical margin γ_z(w) coincide.
This result is the theoretical basis of the class of large margin algorithms as it
directly allows us to make use of the attained geometrical margin γ z (w) for giving
bounds on the expected risk R[h_w] of a linear classifier. An appealing feature
of the result is the subsequent capability of obtaining nontrivial bounds on the
expected risk even when the number n of dimensions of feature space is much
larger than the number m of training examples. Whilst this is impossible to achieve
in the parametric statistics approach we see that by directly studying the expected
risk we are able to defy the curse of dimensionality.
Remark 4.33 (Sufficient training sample size) At first glance the bound (4.29)
might represent progress. We must recall, however, that the theorem requires
that the attained margin γ z (w) satisfies m (γ z (w))2 /ς 2 ≥ 32. Noticing that
(ς/γ z (w))2 can be viewed as an effective VC dimension ϑeff we see that this is
equivalent to assuming that m/ϑ_eff ≥ 32—the rule of thumb already given by Vapnik!
However, calculating the minimum training sample size m for a given margin com-
plexity ϑeff = (ς/γ z (w))2 , we see that equation (4.29) becomes nontrivial, i.e., less
than one, only for astronomically large values of m, e.g., m > 34 816 for ϑeff = 1
(see Figure 4.8). Thus it can be argued that Theorem 4.32 is more a qualitative
justification of large margin algorithms than a practically useful result. We shall
see in Section 5.1 that a conceptually different analysis leads to a similar bound
for linear classifiers which is much more practically useful.
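The following sketch reproduces the flavor of Figure 4.8 under our own assembly of the bound: we combine fat_ℱ(γ) ≤ (ς/γ)² (Lemma 4.31 with B = 1) with the complexity term used in Theorem 4.30 and search for the smallest m making the bound nontrivial; since equation (4.29) is not restated here, the constants are only approximately those of the theorem, but the result is of the same order as the value m > 34 816 quoted in Remark 4.33.

```python
# Minimal sample size for a nontrivial margin bound, under our own assembly
# of the complexity term (confidence term dropped as in the Figure 4.8 caption).
from math import e, log2

def risk_bound(m, theta_eff):
    """Complexity part of the margin bound for effective dimension (varsigma/gamma)^2."""
    fat = 64.0 * theta_eff                      # fat shattering dim. at scale gamma/8
    return 2.0 / m * (1.0 + fat * log2(8.0 * e * m / fat) * log2(32.0 * m))

def minimal_sample_size(theta_eff):
    m = int(32 * theta_eff)                     # Lemma 4.29 needs fat(gamma/8) <= 2m
    while risk_bound(m, theta_eff) >= 1.0:
        m += 1
    return m

for theta in (1, 2, 5):
    print(f"effective dimension {theta}: bound nontrivial from m ~ {minimal_sample_size(theta)}")
```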
A major drawback of the margin bound given by Theorem 4.32 is its sensitivity
to a few training examples (xi , yi ) ∈ z ∈ m for which the margin γi (w) of a
linear classifier h w may be small. In the extreme case we can imagine a situation
in which the first m − 1 training examples from z are correctly classified with a
maximum margin of γi (w) = ς but the last observation has γm (w) = 0. It does
not seem plausible that this single point has such a large impact on the expected
risk of h w that we are unable to give any guarantee on the expected risk R [h w ].
Algorithmically we have already seen that this difficulty can easily be overcome by
the introduction of soft margins (see Subsection 2.4.2). As a consequence, Shawe-
Figure 4.8 Minimal training sample size as a function of the margin complexity ς²/γ²
such that equation (4.29) becomes less than one (ignoring the ld(2/δ) term due to the
astronomically large values of m).
Taylor and Cristianini called the existing margin bound “nonrobust”. The core
idea involved in making the margin bound (4.29) “robust” is to construct an inner
product space from a given feature space ⊆ n such that, for a linear classifier
2
h w that fails to achieve only positive margins γi (w) on the training sample z, we
can find a corresponding linear classifier h w̃ in the inner product space achieving
a positive margin γ z (w) on the mapped training sample whilst yielding the same
classification as h w for all unseen test objects. One way to achieve this is as follows:
1. Based on the given input space 𝒳 and the feature space 𝒦 with the associated
mapping φ: 𝒳 → 𝒦, for each training sample size m we set up a new inner product
space
$$\mathcal{K}_{m} \stackrel{\mathrm{def}}{=} \mathcal{K} \times \left\{ \sum_{i=1}^{j} \lambda_{i} I_{x_{i}} \,\middle|\, j \in \{1, \ldots, m\},\ x_{1}, \ldots, x_{j} \in \mathcal{X},\ \lambda_{i} \in \mathbb{R} \right\}, \tag{4.30}$$
where the second term on the r.h.s. of (4.30) is well defined because we only
consider functions that are non-zero on finitely many (at most m) points. The
inner product space 𝒦_m can be set up independently of a training sample z. Given a
positive value Δ > 0, each point x_i ∈ x is mapped to 𝒦_m by τ_Δ(x_i) = (x_i, Δ·I_{x_i}).
Further, for each example (x, y) ∉ z not contained in the training sample we see
that the real-valued output of the classifier ω_{Δ,γ}(w) equals the real-valued output
of the unmodified weight vector w, i.e.,
$$y\left\langle \omega_{\Delta,\gamma}(\mathbf{w}), \tau_{\Delta}(x) \right\rangle = y\langle \mathbf{w}, \mathbf{x} \rangle + y \sum_{(x_{i}, y_{i}) \in z} y_{i} \cdot d\left((x_{i}, y_{i}), \mathbf{w}, \gamma\right) \cdot \left\langle I_{x_{i}}, I_{x} \right\rangle = y\langle \mathbf{w}, \mathbf{x} \rangle,$$
because ⟨I_{x_i}, I_x⟩ = 0 whenever x is not one of the training objects.
Hence we can use ω_{Δ,γ}(w) to characterize the expected risk of w but at the same
time exploit the fact that ω_{Δ,γ}(w) achieves a margin of at least γ in the inner product
space 𝒦_m.
3. Let us assume that P_Z is such that P_X({x ∈ 𝒳 | ‖x‖ ≤ ς}) = 1. In order to
apply Theorem 4.32 for ω_{Δ,γ}(w) and the set {τ_Δ(x) | x ∈ 𝒳} we notice that, for
a given value of γ and Δ, the achieved margin involves the quantity
$$D(z, \mathbf{w}, \gamma) \stackrel{\mathrm{def}}{=} \sqrt{\sum_{(x_{i}, y_{i}) \in z} \left( d\left((x_{i}, y_{i}), \mathbf{w}, \gamma\right) \right)^{2}}. \tag{4.31}$$
Note that (D(z, w, 1))² exactly captures the squared sum of the slack variables
in the soft margin support vector machine algorithm given by (2.49).
(b) All mapped points are contained in a ball of radius √(ς² + Δ²) because
$$\forall x \in \mathcal{X}: \quad \left\| \tau_{\Delta}(x) \right\|^{2} = \|\mathbf{x}\|^{2} + \Delta^{2} \le \varsigma^{2} + \Delta^{2}.$$
Thus by an application of Lemma 4.31 to a classifier ω_{Δ,γ}(w) we have shown the
following lemma¹².
Lemma 4.34 (Margin distribution) Suppose 𝒦 is a given feature space. For all
Δ > 0, for all probability measures P_Z such that P_X({x ∈ 𝒳 | ‖x‖ ≤ ς}) = 1,
for any δ ∈ (0, 1], with probability at least 1 − δ over the random draw of the
training sample z ∈ 𝒵^m, for all γ ∈ (0, ς] the expected risk R[h_w] of a linear
classifier h_w w.r.t. the zero-one loss l_{0−1} is bounded from above by
$$R[h_{\mathbf{w}}] \le \frac{2}{m}\left( \left\lceil d_{\mathrm{eff}}(\Delta) \operatorname{ld}\left(\frac{8em}{d_{\mathrm{eff}}(\Delta)}\right) \operatorname{ld}(32m) \right\rceil + \operatorname{ld}\frac{2m}{\delta} \right),$$
where
$$d_{\mathrm{eff}}(\Delta) = \frac{64 \left( \|\mathbf{w}\| + \frac{D(z, \mathbf{w}, \gamma)}{\Delta} \right)^{2} \left( \varsigma^{2} + \Delta^{2} \right)}{\gamma^{2}} \tag{4.32}$$
must obey d_eff(Δ) ≤ 2m.
Note that the term D(z, w, γ) given in equation (4.31) is not invariant under
rescaling of w. For a fixed value of γ, increasing the norm ‖w‖ of w can only
lead to a decrease in the term D(z, w, γ). Thus, without loss of generality, we will
fix ‖w‖ = 1 in the following exposition.
12 With a slight lack of rigor we omitted the condition that there is no discrete probability PZ on misclassified
training examples because ω,γ (w) characterizes w only at non-training examples.
Theorem 4.35 (Robust margin bound) Suppose 𝒦 ⊆ ℓ_2^n is a given feature space.
For all probability measures P_Z such that P_X({x ∈ 𝒳 | ‖x‖ ≤ ς}) = 1, for any
δ ∈ (0, 1], with probability at least 1 − δ over the random draw of the training
sample z ∈ 𝒵^m, for all γ ∈ (0, ς] the expected risk R[h_w] w.r.t. the zero-one loss
l_{0−1} of a linear classifier h_w with ‖w‖ = 1 is bounded from above by
$$R[h_{\mathbf{w}}] \le \frac{2}{m}\left( \left\lceil d_{\mathrm{eff}} \operatorname{ld}\left(\frac{8em}{d_{\mathrm{eff}}}\right) \operatorname{ld}(32m) \right\rceil + \operatorname{ld}\frac{\left(16 + \operatorname{ld}(m)\right) m}{\delta} \right), \tag{4.33}$$
where
$$d_{\mathrm{eff}} = \frac{65 \left( \varsigma + 3 D(z, \mathbf{w}, \gamma) \right)^{2}}{\gamma^{2}}$$
must obey d_eff ≤ 2m.
Note that, by application of Lemma 4.14, we only gain an additional summand of
3 + ld(ld(√m)) in the numerator of equation (4.33). Coming back to our initial
example we see that, in the case of m − 1 examples correctly classified with a
(maximum) geometrical margin of γ_i(w) = ς and the mth example misclassified
by a geometrical margin of 0, Theorem 4.35 gives us an effective dimensionality
d_eff of 65 · 16 = 1040 and thus, for sufficiently large training sample size m,
we will get a nontrivial bound on the expected risk R[h_w] of h_w although h_w
admits training errors. Note, however, that the result is again more a qualitative
justification of soft margins as introduced in Subsection 2.4.2 rather than being
practically useful (see also Remark 4.33). This, however, is merely due to the fact
that we set up the “robustness” trick on top of the fat shattering bound given in
Theorem 4.30.
Remark 4.36 (Justification of soft margin support vector machines) One of the
motivations for studying robust margin bounds is to show that the soft margin
heuristic introduced for support vector machines has a firm theoretical basis. In
order to see this we note that in the soft margin case the norm w of the re-
sulting classifier is not of unit length as we fixed the functional margin to be one.
Therefore, we consider the case of γ = 1/‖w‖ and w_norm = w/‖w‖, which gives
$$\left( D\left(z, \mathbf{w}_{\mathrm{norm}}, \tfrac{1}{\|\mathbf{w}\|}\right) \right)^{2} = \sum_{i=1}^{m} \left( \max\left\{ 0, \frac{1}{\|\mathbf{w}\|} - y_{i}\left\langle \frac{\mathbf{w}}{\|\mathbf{w}\|}, \mathbf{x}_{i} \right\rangle \right\} \right)^{2} = \frac{1}{\|\mathbf{w}\|^{2}} \sum_{i=1}^{m} \left( \max\left\{ 0, 1 - y_{i}\langle \mathbf{w}, \mathbf{x}_{i} \rangle \right\} \right)^{2} = \frac{1}{\|\mathbf{w}\|^{2}} \sum_{i=1}^{m} l_{\mathrm{quad}}\left( \langle \mathbf{w}, \mathbf{x}_{i} \rangle, y_{i} \right) = \frac{1}{\|\mathbf{w}\|^{2}} \sum_{i=1}^{m} \xi_{i}^{2},$$
according to the slack variables ξ_i introduced in equations (2.48) and (2.49). For
the effective dimensionality d_eff it follows that
$$d_{\mathrm{eff}} = 65\, \|\mathbf{w}\|^{2} \left( \varsigma + \frac{3}{\|\mathbf{w}\|} \sqrt{\sum_{i=1}^{m} \xi_{i}^{2}} \right)^{2} = 65 \left( \varsigma \|\mathbf{w}\| + 3 \|\boldsymbol{\xi}\|_{2} \right)^{2} \tag{4.34}$$
$$\le 65 \left( \varsigma \|\mathbf{w}\| + 3 \|\boldsymbol{\xi}\|_{1} \right)^{2}, \tag{4.35}$$
where we use the fact that ‖ξ‖₂ ≤ ‖ξ‖₁. Since, by the assumption that ξ_i ≥ 0, we
know ‖ξ‖₁ = Σ_{i=1}^m ξ_i and, thus, equations (4.35) and (4.34) are somewhat similar
to the objective function minimized by the optimization problems (2.48) and (2.49).
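A quick numerical check of the identity just derived, on made-up data, is given below.

```python
# Verify that D(z, w/||w||, 1/||w||)^2 = (1/||w||^2) * sum_i max(0, 1 - y_i <w, x_i>)^2.
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 5
X = rng.normal(size=(m, n))
y = np.where(rng.random(m) < 0.5, -1.0, 1.0)
w = rng.normal(size=n)

norm_w = np.linalg.norm(w)
# left-hand side: the margin slacks d((x_i, y_i), w_norm, gamma) at gamma = 1/||w||
d = np.maximum(0.0, 1.0 / norm_w - y * (X @ (w / norm_w)))
lhs = np.sum(d ** 2)
# right-hand side: quadratic slack variables xi_i of the functional margin formulation
xi = np.maximum(0.0, 1.0 - y * (X @ w))
rhs = np.sum(xi ** 2) / norm_w ** 2
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}, equal: {np.isclose(lhs, rhs)}")
```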
Combining equation (4.36) and (4.37) we have shown the following theorem for
adaptive margin machines.
The PAC framework was originally formulated by Valiant for the problem of learning logic formulas, assuming that the hypothesis space contains the target formula. Hence all uncertainty is due to the unknown input distribution¹³ P_X.
The restriction to logic formulas also simplified the matter because the number of
hypotheses then becomes finite even though it grows exponentially in the number
of binary features. Since then a number of generalizations have been proposed by
dropping the assumption of finite hypothesis spaces and realizability, i.e., that the “or-
acle” draws its target hypothesis h* from the hypothesis space ℋ which we use
for learning (see Blumer et al. (1989) and Anthony (1997) for a comprehensive
overview). The latter generalization became known as the agnostic PAC frame-
work (Kearns et al. 1992). Though we have ignored computational complexity and
computability aspects, the PAC model in its pure form is also concerned with these
questions.
Apart from these developments, V. Vapnik and A. Chervonenkis already studied
the general convergence question in the late 1960s. In honor of them, their frame-
work is now known as the VC (Vapnik-Chervonenkis) framework. They showed
that the convergence of expected risks is equivalent to the uniform convergence
of frequencies to probabilities over a fixed set of events (Vapnik and Chervo-
nenkis 1991) (see Vapnik (1998, Chapter 16) for a definition of “nontrivial” hy-
pothesis spaces and Bartlett et al. (1996) for a constructive example). This equiv-
alence is known as the key theorem in learning theory. The answer to a particular
case of this problem was already available through the Glivenko-Cantelli lemma
(Glivenko 1933; Cantelli 1933) which says that the empirical distribution function
of a one dimensional random variable converges uniformly to the true distribution
function in probability. The rate of convergence was proven for the first time in
Kolmogorov (1933). Vapnik and Chervonenkis generalized the problem and asked
themselves which property a set of events must share such that this convergence
still takes place. As a consequence, these sets of events are known as Glivenko-
Cantelli classes. In 1987, M. Talagrand obtained the general answer to the problem
of identifying Glivenko-Cantelli classes (Talagrand 1987). Ten years later this re-
sult was independently rediscovered by Alon et al. (1997). It is worth mentioning
that most of the results in the PAC framework are particular cases of more general
results already obtained by Vapnik and coworkers two decades before.
The main VC and PAC bounds given in equations (4.11) and (4.12) were first
proven in Vapnik and Chervonenkis (1974) and effectively differ by the exponent
at the deviation of ε. In Vapnik (1982, Theorem 6.8) it is shown that this expo-
nent continuously varies from 2 to 1 w.r.t. the smallest achievable expected risk
13 In the original work of Valiant he used the term oracle to refer to P_X.
inf_{h∈ℋ} R[h] (see also Lee et al. (1998) for tighter results in the special case of
convex hypothesis spaces). The VC and PAC analysis revealed that, for the case
of learning, the growth function of a hypothesis space is an appropriate a-priori
measure of its complexity. As the growth function is very difficult to compute,
it is often characterized by a one-integer summary known as VC dimension (see
Theorem 4.10 and Sontag (1998) for an excellent survey of the VC dimension).
The first proof of this theorem is due to Vapnik and Chervonenkis (1971) and was
discovered independently in Sauer (1972) and Shelah (1972); the former credits
Erdös with posing it as a conjecture. In order to make the VC dimension a vari-
able of the learning algorithm itself two conceptually different approaches were
presented: By defining an a-priori structuring of the hypothesis space—sometimes
also referred to as a decomposition of the hypothesis space (Shawe-Taylor et al.
1998)—it is possible to provide guarantees for the generalization error with high
confidence by sharing the confidence among the different hypothesis spaces. This
principle, known as structural risk minimization, is due to Vapnik and Chervo-
nenkis (1974). A more promising approach is to define an effective complexity via
a luckiness function which encodes some prior hope about the learning problem
given by the unknown P_Z. This framework, also termed the luckiness framework, is
due to Shawe-Taylor et al. (1998). For more details on the related problem of con-
ditional confidence intervals the interested reader is referred to Brownie and Kiefer
(1977), Casella (1988), Berger (1985) and Kiefer (1977). All examples given in
Section 4.3 are taken from Shawe-Taylor et al. (1998). The luckiness framework is
most advantageous if we refine what is required from a learning algorithm: A learn-
ing algorithm 𝒜 is given a training sample z ∈ 𝒵^m and a confidence δ ∈ (0, 1],
and is then required to return a hypothesis 𝒜(z) ∈ ℋ together with an accuracy
ε such that in at least 1 − δ of the learning trials the expected risk of 𝒜(z) is
less than or equal to the given ε. Y. Freund called such learning algorithms self
bounding learning algorithms (Freund 1998). Although, without making explicit
assumptions on PZ , all learning algorithms might be equally good, a self bounding
learning algorithm is able to tell the practitioner when its implicit assumptions are
met. Obviously, a self bounding learning algorithm can only be constructed having
a theoretically justified generalization error bound available.
In the last section of this chapter we presented a PAC analysis for the particular
hypothesis space of linear classifiers making extensive use of the margin as a
data dependent complexity measure. In Theorem 4.25 we showed that the margin,
that is, the minimum real-valued output of a linear classifier before thresholding,
allows us to replace the coarse application of the union bound over the worst case
diversity of the binary-valued function class by a union bound over the number of
equivalence classes witnessed by the observed margin. The proof of this result can
also be found in Shawe-Taylor and Cristianini (1998, Theorem 6.8) and Bartlett
(1998, Lemma 4). Using a scale sensitive version of the VC dimension known as
the fat shattering dimension (Kearns and Schapire 1994) we obtained bounds on the
expected risk of a linear classifier which can be directly evaluated after learning.
An important tool was Lemma 4.29 which can be found in Alon et al. (1997).
The final step was an application of Lemma 4.31 which was proven in Gurvits
(1997) and later simplified in Bartlett and Shawe-Taylor (1999). It should be noted,
however, that the application of Alon’s result yields bounds which are practically
irrelevant as they require the training sample size to be of order 105 in order to
be nontrivial. Reinterpreting the margin we demonstrated that this margin bound
directly gives a bound on the expected risk involving a function of the margin
distribution. This study closely followed the original papers Shawe-Taylor and
Cristianini (1998) and Shawe-Taylor and Cristianini (2000). A further application
of this idea showed that although not containing any margin complexity, adaptive
margin machines effectively minimize the complexity of the resulting classification
functions. Recently it has been demonstrated that a functional analytic viewpoint
offers ways to get much tighter bounds on the covering number at the scale of
the observed margin (see Williamson et al. (2000), Shawe-Taylor and Williamson
(1999), Schölkopf et al. (1999) and Smola et al. (2000)).
5 Bounds for Specific Algorithms
This chapter presents a theoretical study of the generalization error of specific algo-
rithms as opposed to uniform guarantees about the expected risks over the whole
hypothesis space. It starts with a PAC type or frequentist analysis for Bayesian
learning algorithms. The main PAC-Bayesian generalization error bound measures
the complexity of a posterior belief by its evidence. Using a summarization prop-
erty of hypothesis spaces known as Bayes admissibility, it is possible to apply the
main results to single hypotheses. For the particular case of linear classifiers we
obtain a bound on the expected risk in terms of a normalized margin on the train-
ing sample. In contrast to the classical PAC margin bound, the new bound is an
exponential improvement in terms of the achieved margin. A drawback of the new
bound is its dependence on the number of dimensions of feature space.
In order to study more conventional machine learning algorithms the chapter
introduces the compression framework. The main idea here is to take advantage
of the fact that, for certain learning algorithms, we can remove training examples
without changing its behavior. It will be shown that the intuitive notion of compres-
sion coefficients, that is, the fraction of necessary training examples in the whole
training sample, can be justified by rigorous generalization error bounds. As an ap-
plication of this framework we derive a generalization error bound for the percep-
tron learning algorithm which is controlled by the margin a support vector machine
would have achieved on the same training sample. Finally, the chapter presents a
generalization error bound for learning algorithms that exploits the robustness of a
given learning algorithm. In the current context, robustness is defined as the prop-
erty that a single extra training example has a limited influence on the hypothesis
learned, measured in terms of its expected risk. This analysis allows us to show that
the leave-one-out error is a good estimator of the generalization error, putting the
common practice of performing model selection on the basis of the leave-one-out
error on a sound theoretical basis.
Up to this point we have investigated the question of bounds on the expected risk
that hold uniformly over a hypothesis space. This was done due to the assumption
that the selection of a single hypothesis on the basis of the training sample z ∈
m is the ultimate goal of learning. In contrast, a Bayesian algorithm results in
(posterior) beliefs PH|Zm =z over all hypotheses. Based on the posterior measure
PH|Zm =z different classification strategies are conceivable (see Subsection 3.1.1
for details). The power of a Bayesian learning algorithm is in the possibility
of incorporating prior knowledge about the learning task at hand via the prior
measure PH . Recently D. McAllester presented some so-called PAC-Bayesian
theorems which bound the expected risk of Bayesian classifiers while avoiding
the use of the growth function and related quantities altogether. Unlike classical
Bayesian analysis—where we make the implicit assumption that the unknown
measure PZ of the data can be computed from the prior PH and the likelihood
PZ|H=h by EH PZ|H=h —these results hold for any distribution PZ of the training
data and thus fulfill the basic desiderata of PAC learning theory. The key idea
to obtain such results is to take the concept of structural risk minimization to its
extreme—where each hypothesis space contains exactly one hypothesis. A direct
application of the multiple testing lemma 4.14 yields bounds on the expected
risk for single hypotheses, which justify the use of the MAP strategy as one
possible learning method in a Bayesian framework. Applying a similar idea to
subsets of the hypothesis space then results in uniform bounds for average
classifications as carried out by the Gibbs classification strategy. Finally, the use of
a simple inequality between the expected risk of the Gibbs and Bayes classification
strategies completes the list of generalization error bounds for Bayesian algorithms.
It is worth mentioning that we have already used prior beliefs in the application of
structural risk minimization (see Subsection 4.2.3).
In this section we present generalization error bounds for the three Bayesian
classification strategies presented in Subsection 3.1.1. We shall confine ourselves to
the PAC likelihood defined in Definition 3.3 which, in a strict Bayesian treatment,
corresponds to the assumption that the loss is given by the zero-one loss l0−1 . Note,
however, that the main ideas of the PAC-Bayesian framework carry over far beyond
this simple model (see Section 5.4 for further references).
which holds with probability at least 1 − δ over the random draw of the training
sample z ∈ m . Hence, applying Lemma 4.14 with PS = PH we have proven our
first PAC-Bayesian result.
Theorem 5.1 (Bound for single hypotheses) For any measure P_H and any mea-
sure P_Z, for any δ ∈ (0, 1], with probability at least 1 − δ over the random draw of
the training sample z ∈ 𝒵^m, for all hypotheses h ∈ V(z) that achieve zero train-
ing error R_emp[h, z] = 0 and have P_H(h) > 0, the expected risk R[h] is bounded
from above by
$$R[h] \le \frac{1}{m}\left( \ln\frac{1}{P_H(h)} + \ln\frac{1}{\delta} \right). \tag{5.2}$$
This bound justifies the MAP estimation procedure because, by assumption of the
PAC likelihood for each hypothesis h not in version space V (z), the posterior
measure PH|Zm =z (h) vanishes due to the likelihood term. Thus, the posterior mea-
sure PH|Zm =z is merely a rescaled version of the prior measure PH , only positive
inside version space V (z). Hence, the maximizer MAP (z) of the posterior mea-
sure PH|Zm =z must be the hypothesis with maximal prior measure PH which is, at
the same time, the minimizer of equation (5.2).
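The following sketch (our own illustration with an invented prior) evaluates the bound (5.2) over a small finite hypothesis space; among the consistent hypotheses, the one of largest prior probability—the MAP hypothesis—receives the smallest guarantee.

```python
# Bound (5.2) on a finite hypothesis space; all numbers are illustrative.
from math import log

m, delta = 200, 0.05
prior = {"h1": 0.5, "h2": 0.3, "h3": 0.15, "h4": 0.05}      # P_H, a made-up prior
version_space = ["h2", "h3", "h4"]                           # hypotheses with zero training error

bounds = {h: (log(1.0 / prior[h]) + log(1.0 / delta)) / m for h in version_space}
for h, b in sorted(bounds.items(), key=lambda kv: kv[1]):
    print(f"{h}: prior = {prior[h]:.2f}, risk bound = {b:.4f}")
# the minimizer is the consistent hypothesis of largest prior, i.e. the MAP hypothesis
```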
Considering the Gibbs classification strategy given in Definition 3.8 we see that,
due to the non-deterministic classification function, the expected risk of Gibbs z
$$P_{H|H\in H(z)}\left( R[H] > \varepsilon \right) = \frac{P_H\left( \left(H \in H(z)\right) \wedge \left(R[H] > \varepsilon\right) \right)}{P_H\left(H(z)\right)} < \frac{\alpha}{P_H\left(H(z)\right)}.$$
Finally, choosing α = P_H(H(z))/m and β = 1/m, as well as exploiting the
fact that the function P_{H|H∈H(z)}(R[H] > ε) is monotonically increasing in ε, it is
readily verified that, with probability at least 1 − δ over the random draw of the
training sample z ∈ 𝒵^m,
$$R\left[\operatorname{Gibbs}_{H(z)}\right] \le \varepsilon \cdot \left(1 - \frac{1}{m}\right) + \frac{1}{m} = \frac{1}{m}\left( \ln\frac{1}{P_H\left(H(z)\right)} + 2\ln(m) + \ln\frac{1}{\delta} + 1 \right).$$
Thus we have shown our second PAC-Bayesian result.
Theorem 5.2 (Bound for subsets of hypotheses) For any measure P_H and any
measure P_Z, for any δ ∈ (0, 1], with probability at least 1 − δ over the random
draw of the training sample z ∈ 𝒵^m, for all subsets H(z) ⊆ V(z) such that
P_H(H(z)) > 0, the expected risk of the associated Gibbs classification strategy
Gibbs_{H(z)} is bounded from above by
$$R\left[\operatorname{Gibbs}_{H(z)}\right] \le \frac{1}{m}\left( \ln\frac{1}{P_H\left(H(z)\right)} + 2\ln(m) + \ln\frac{1}{\delta} + 1 \right). \tag{5.5}$$
Finally, in order to obtain a PAC-Bayesian bound on the expected risk of the Bayes
classification strategy given in Definition 3.7 we make use of the following simple
lemma.
Lemma 5.3 (Gibbs-Bayes lemma) For any measure P_{H|Z^m=z} over hypothesis
space ℋ ⊆ 𝒴^𝒳 and any measure P_{XY} over data space 𝒳 × 𝒴 = 𝒵, for all
training samples z ∈ 𝒵^m and the zero-one loss l_{0−1}
$$R\left[\operatorname{Bayes}_{z}\right] \le |\mathcal{Y}| \cdot R\left[\operatorname{Gibbs}_{z}\right]. \tag{5.6}$$
Proof For any training sample z ∈ 𝒵^m and associated measure P_{H|Z^m=z} consider
the set
$$Z_z = \left\{ (x, y) \in \mathcal{Z} \,\middle|\, l_{0-1}\left(\operatorname{Bayes}_{z}(x), y\right) = 1 \right\}.$$
For all points (x, y) ∉ Z_z the loss incurred by Bayes_z is zero and thus the bound
holds pointwise. For all points (x, y) ∈ Z_z the expectation value
E_{H|Z^m=z}[l_{0−1}(H(x), y)] (as considered for the Gibbs classification strategy) will
be at least 1/|𝒴| because Bayes_z(x) makes, by definition, the same classification
as the majority of the h's weighted by P_{H|Z^m=z}. As there are |𝒴| different classes
the majority has to have a measure of at least 1/|𝒴|. Thus, multiplying this value by
|𝒴| upper bounds the loss of one incurred on the l.h.s. by Bayes_z. The lemma is
proved.
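Below is a small Monte Carlo sketch (entirely synthetic data and posterior) confirming the inequality of Lemma 5.3 for a three-class problem.

```python
# Monte Carlo check of the Gibbs-Bayes lemma on synthetic predictions.
import numpy as np

rng = np.random.default_rng(2)
n_classes, n_hyp, n_points = 3, 5, 1000

labels = rng.integers(0, n_classes, size=n_points)                 # true classes y
predictions = rng.integers(0, n_classes, size=(n_hyp, n_points))   # h_j(x_i)
posterior = rng.dirichlet(np.ones(n_hyp))                          # P_{H|Z^m=z}

# Gibbs risk: expected zero-one loss of a randomly drawn hypothesis
gibbs_risk = np.sum(posterior[:, None] * (predictions != labels)) / n_points
# Bayes risk: zero-one loss of the posterior-weighted majority vote
votes = np.zeros((n_classes, n_points))
for j in range(n_hyp):
    votes[predictions[j], np.arange(n_points)] += posterior[j]
bayes_risk = np.mean(np.argmax(votes, axis=0) != labels)

print(f"Bayes risk = {bayes_risk:.3f} <= |Y| * Gibbs risk = {n_classes * gibbs_risk:.3f}: "
      f"{bayes_risk <= n_classes * gibbs_risk}")
```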
A direct application of this lemma to Theorem 5.2 finally yields our third PAC-
Bayesian result.
Theorem 5.4 (Bound for the Bayes classification strategy) For any measure P_H
and any measure P_Z, for any δ ∈ (0, 1], with probability at least 1 − δ over the
random draw of the training sample z ∈ 𝒵^m, for all subsets H(z) ⊆ V(z)
such that P_H(H(z)) > 0 the expected risk of the generalized Bayes classification
strategy Bayes_{H(z)} given by
$$\operatorname{Bayes}_{H(z)}(x) \stackrel{\mathrm{def}}{=} \operatorname*{argmax}_{y \in \mathcal{Y}}\ P_{H|H \in H(z)}\left( \left\{ h \in \mathcal{H} \mid h(x) = y \right\} \right)$$
is bounded from above by
$$R\left[\operatorname{Bayes}_{H(z)}\right] \le \frac{|\mathcal{Y}|}{m}\left( \ln\frac{1}{P_H\left(H(z)\right)} + 2\ln(m) + \ln\frac{1}{\delta} + 1 \right). \tag{5.7}$$
Again, H(z) = V(z) minimizes the bound (5.7) and, as such, theoretically
justifies the Bayes optimal decision using the whole of version space without
assuming the “correctness” of the prior. Note, however, that the bound becomes
trivial as soon as P_H(V(z)) ≤ exp(−m/|𝒴|). An appealing feature of these
bounds is given by the fact that their complexity term − ln(P_H(V(z))) vanishes in the
most “lucky” case of observing a training sample z such that all hypotheses are
consistent with it.
If we have chosen too “small” a hypothesis space beforehand there might
not even exist a single hypothesis consistent with the training sample; if, on the
other hand, the hypothesis space contains many different hypotheses the prior
probability of single hypotheses is exponentially small. We have already seen this
dilemma in the study of the structural risk minimization framework (see Subsection
4.2.3).
It is worth mentioning that the three results presented above are based on the
assertion given in equation (5.1). This (probabilistic) bound on the expected risk of
hypotheses consistent with the training sample z ∈ 𝒵^m is based on the binomial tail
bound. If we replace this starting point with the corresponding assertion obtained
from Hoeffding's inequality, i.e.,
$$\Upsilon_i(z, m, \delta) \equiv R[h_i] - R_{\mathrm{emp}}[h_i, z] \le \sqrt{\frac{\ln\frac{1}{\delta}}{2m}},$$
and perform the same steps as before then we obtain bounds that hold uniformly
over the hypothesis space (Theorem 5.1) or for all measurable subsets H ⊆ ℋ of
hypothesis space (Theorems 5.2 and 5.4). More formally, we obtain the following.
Theorem 5.6 (PAC-Bayesian bounds with training errors) For any measure P_H
and any measure P_Z, for any δ ∈ (0, 1], with probability at least 1 − δ over the
random draw of the training sample z ∈ 𝒵^m, for all hypotheses h ∈ ℋ such that
P_H(h) > 0,
$$R[h] \le R_{\mathrm{emp}}[h, z] + \sqrt{\frac{1}{2m}\left( \ln\frac{1}{P_H(h)} + \ln\frac{1}{\delta} \right)}.$$
Clearly, even in the case of considering hypotheses which incur training errors,
it holds that the bound is smaller for the Gibbs classification strategy than for
any single hypothesis found by the MAP procedure. Moreover, the result on
the expected risk of the Gibbs classification strategy (or the Bayes classification
strategy when using Lemma 5.3) given in equation (5.8) defines an algorithm
which selects a subset H(z) ⊆ ℋ of hypothesis space so as to minimize the
bound. Note that by the selection of a subset this procedure automatically defines
bound. Note that by the selection of a subset this procedure automatically defines
a principle for inferring a distribution PH|H∈H (z) over the hypothesis space which is
therefore called the PAC-Bayesian posterior.
Remark 5.7 (PAC-Bayesian posterior) The ideas outlined can be taken one step
further when considering not only subsets H(z) ⊆ ℋ of a hypothesis space but
whole measures Q_{H|Z^m=z}.¹ In this case, for each test object x ∈ 𝒳 we must consider
a (Gibbs) classification strategy Gibbs_{Q_{H|Z^m=z}} that draws a hypothesis h ∈ ℋ
according to the measure Q_{H|Z^m=z} and uses it for classification. Then, it is possible
to prove a result which bounds the expected risk of this Gibbs classification strategy
Gibbs_{Q_{H|Z^m=z}} uniformly over all possible Q_{H|Z^m=z} by
$$\mathbf{E}_{Q_{H|Z^m=z}}\left[ R_{\mathrm{emp}}[H, z] \right] + \sqrt{\frac{D\left(Q_{H|Z^m=z} \,\|\, P_H\right) + \ln(m) + \ln\frac{1}{\delta} + 2}{2m - 1}}, \tag{5.9}$$
1 With a slight abuse of notation, in this remark we use Q_{H|Z^m=z} and q_{H|Z^m=z} to denote any measure and
density over the hypothesis space based on the training sample z ∈ 𝒵^m.
where²
$$D\left(Q_{H|Z^m=z} \,\|\, P_H\right) = \mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln\frac{q_{H|Z^m=z}(H)}{f_H(H)} \right]$$
is known as the Kullback-Leibler divergence between Q_{H|Z^m=z} and P_H. Disregard-
ing the square root and setting 2m − 1 to m (both are due to the application of
Hoeffding's inequality) we therefore have that the PAC-Bayesian posterior is ap-
proximately given by the measure Q_{H|Z^m=z} which minimizes
$$\mathbf{E}_{Q_{H|Z^m=z}}\left[ R_{\mathrm{emp}}[H, z] \right] + \frac{D\left(Q_{H|Z^m=z} \,\|\, P_H\right) + \ln(m) + \ln\frac{1}{\delta} + 2}{m}. \tag{5.10}$$
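The following sketch evaluates the bound (5.9) for a discrete hypothesis space, a uniform prior and a family of softmax-style posteriors of varying sharpness; all numbers, and the particular posterior family, are our own illustrative choices.

```python
# Evaluating the PAC-Bayesian bound (5.9) for a discrete hypothesis space.
import numpy as np

def pac_bayes_bound(q, p, emp_risks, m, delta):
    kl = np.sum(q * np.log(q / p))
    return np.dot(q, emp_risks) + np.sqrt(
        (kl + np.log(m) + np.log(1.0 / delta) + 2.0) / (2.0 * m - 1.0))

m, delta = 500, 0.05
p = np.full(4, 0.25)                                  # uniform prior over four hypotheses
emp_risks = np.array([0.30, 0.10, 0.05, 0.25])        # observed training errors
for beta in (0.0, 5.0, 50.0):                         # sharper posteriors concentrate on low error
    q = np.exp(-beta * emp_risks)
    q /= q.sum()
    print(f"beta = {beta:5.1f}  bound = {pac_bayes_bound(q, p, emp_risks, m, delta):.3f}")
```

The printout illustrates the trade-off between the expected empirical risk and the Kullback-Leibler complexity term as the posterior concentrates.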
Whenever we consider the negative log-likelihood as a loss function,
$$R_{\mathrm{emp}}[h, z] = -\frac{1}{m}\sum_{i=1}^{m} \ln P_{Z|H=h}\left((x_i, y_i)\right) = -\frac{1}{m}\ln P_{Z^m|H=h}(z),$$
this minimizer equals the Bayesian posterior due to the following argument:
For all training sample sizes m ∈ ℕ we have that
$$\mathbf{E}_{Q_{H|Z^m=z}}\left[ R_{\mathrm{emp}}[H, z] \right] = -\frac{1}{m}\mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln P_{Z^m|H=H}(z) \right].$$
Dropping all terms which do not depend on Q_{H|Z^m=z}, equation (5.10) can be
written as
$$\frac{1}{m}\mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln\frac{1}{P_{Z^m|H=H}(z)} \right] + \frac{1}{m}\mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln\frac{q_{H|Z^m=z}(H)}{f_H(H)} \right]
= \frac{1}{m}\mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln\frac{q_{H|Z^m=z}(H)}{P_{Z^m|H=H}(z)\, f_H(H)} \right]
= \frac{1}{m}\mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln\frac{q_{H|Z^m=z}(H)}{f_{H|Z^m=z}(H)\, P_{Z^m}(z)} \right]
= \frac{1}{m}\left( \mathbf{E}_{Q_{H|Z^m=z}}\left[ \ln\frac{q_{H|Z^m=z}(H)}{f_{H|Z^m=z}(H)} \right] - \ln\left(P_{Z^m}(z)\right) \right).$$
This term is minimized if and only if q_{H|Z^m=z}(h) = f_{H|Z^m=z}(h) for all hypotheses
h ∈ ℋ. Thus, the PAC-Bayesian framework provides a theoretical justification
for the use of Bayes' rule in the Bayesian approach to learning, as well as a
way of arriving at the PAC-Bayesian posterior introduced above.
2 Note that q and f denote the densities of the measures Q and P, respectively (see also page 331).
Apart from building a theoretical basis for the Bayesian approach to learning,
the PAC-Bayesian results presented can also be used to obtain (training) data-
dependent bounds on the expected risk of single hypotheses h ∈ . One moti-
vation for doing so is their tightness, i.e., the complexity term − ln (PH (H (z)))
is vanishing in maximally “lucky” situations. We shall use the Bayes classification
strategy as yet another expression of the classification carried out by a single hy-
pothesis h ∈ ℋ. Clearly, this can be done as soon as we are sure that, for a given
subset H(h) ⊆ ℋ, Bayes_{H(h)} behaves exactly the same as a single hypothesis
h ∈ ℋ on the whole space 𝒵 w.r.t. the loss function considered. More formally,
this is captured by the following definition.
For general hypothesis spaces ℋ and prior measures P_H it is difficult to verify the
Bayes admissibility of a hypothesis. Nevertheless, for linear classifiers in some
feature space 𝒦, i.e., x ↦ sign(⟨x, w⟩) where x = φ(x) and φ: 𝒳 → 𝒦 ⊆ ℓ_2^n
(see also Definition 2.2), we have the following geometrically plausible lemma.
Lemma 5.9 (Bayes admissibility for linear classifiers in feature space) For the
uniform measure P_W over the unit hypersphere 𝒲 ⊂ 𝒦 ⊆ ℓ_2^n, each ball
𝒲_τ(w) = {v ∈ 𝒲 | ‖w − v‖ < τ} ⊆ 𝒲 is Bayes admissible w.r.t. its center
$$\mathbf{c} = \frac{\mathbf{E}_{W|W\in\mathcal{W}_\tau(\mathbf{w})}\left[W\right]}{\left\|\mathbf{E}_{W|W\in\mathcal{W}_\tau(\mathbf{w})}\left[W\right]\right\|}.$$
Proof The proof follows from the simple observation that the center of a ball is
always in the bigger half when bisected by a hyperplane.
Remarkably, in using a ball 𝒲_τ(w) rather than w to get a bound on the expected
risk R[h_w] of h_w we make use of the fact that h_w summarizes all its neighboring
classifiers h_v ∈ V(z), v ∈ 𝒲_τ(w). This is somewhat related to the idea of a
covering already exploited in the course of the proof of Theorem 4.25: The cover
element f̂ ∈ F_γ(x) carries all information about the training error of all the
covered functions via its real-valued output referred to as the margin (see page
144 for more details).
In this section we apply the idea of Bayes admissibility w.r.t. the uniform
measure P_W to linear classifiers, that is, we express a linear classifier x ↦
sign(⟨x, w⟩) as a Bayes classification strategy Bayes_{𝒲_τ(w)} over a subset 𝒲_τ(w) of
version space V(z) such that P_W(𝒲_τ(w)) can be lower bounded solely in terms
of the margin. As already seen in the geometrical picture on page 57 we need to
normalize the geometrical margin γ_i(w) of a linear classifier h_w by the length ‖x_i‖
of the ith training point in order to ensure that a ball of the resulting margin is
fully within version space V(z). Such a refined margin quantity Γ_z(w) offers the
advantage that no assumption about finite support of the input distribution PX needs
to be made.
The proof is given in Appendix C.8. The most appealing feature of this new margin
bound is, of course, that in the case of maximally large margins, i.e., Γ_z(w) = 1,
the first term vanishes and the bound reduces to
$$\frac{2}{m}\left( 2\ln(m) + \ln\frac{1}{\delta} + 2 \right).$$
Here, the numerator grows logarithmically whilst the denominator grows linearly
hence giving a rapid decay to zero. Moreover, in the case of
$$\Gamma_z(\mathbf{w}) > \sqrt{2\exp\left(-\tfrac{1}{2}\right) - \exp(-1)} \approx 0.91$$
we enter a regime where $-\ln\left(1 - \sqrt{1 - \Gamma_z^2(\mathbf{w})}\right) < \frac{1}{2}$ and thus the troublesome
situation of d = m is compensated for by a large observed margin. The situation
d = m occurs if we use kernels which map the data into a high dimensional space
as with the RBF kernel (see Table 2.1).
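A two-line check of this regime (our own sketch) is given below.

```python
# The complexity term -ln(1 - sqrt(1 - Gamma^2)) drops below 1/2 once the
# normalized margin Gamma exceeds sqrt(2 exp(-1/2) - exp(-1)).
from math import exp, log, sqrt

threshold = sqrt(2.0 * exp(-0.5) - exp(-1.0))
print(f"threshold = {threshold:.4f}")
for gamma in (0.5, 0.8, 0.91, 0.92, 0.99):
    term = -log(1.0 - sqrt(1.0 - gamma ** 2))
    print(f"Gamma = {gamma:5.3f}  -ln(1 - sqrt(1 - Gamma^2)) = {term:.3f}")
```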
Example 5.11 (Normalizing data in feature space) Theorem 5.10 suggests the
following learning algorithm: Given a version space V (z) find the classifier w
that maximizes Γ_z(w). This algorithm, however, is given by the support vector
machine only if the training data in feature space are normalized. In Figure 5.1
we plotted the expected risks of support vector machine solutions (estimated over
100 different splits of the datasets3 thyroid (m = 140, m test = 75) and sonar
(m = 124, m test = 60)) with (dashed line) and without normalization (solid line)
as a function of the polynomial degree p of a complete polynomial kernel (see
Table 2.1). As suggested by Theorem 5.10 in almost all cases the normalization
improved the performance of the support vector machine solution at a statistically
significant level.
Remark 5.12 (Sufficient training sample size) It may seem that this bound on
the expected risk of linear hypotheses in terms of the margin is much tighter than
the PAC margin bound presented in Theorem 4.32 because its scaling behavior
as a function of the margin is exponentially better. Nevertheless, the current result
depends heavily on the dimensionality n ∈ ℕ of the feature space 𝒦 ⊆ ℓ_2^n whereas
the result in Theorem 4.32 is independent of this number. This makes the current
result a practically relevant bound if the number n of dimensions of feature space
is much smaller than the training sample size. A challenging problem is to use
the idea of structural risk minimization. If we can map the training sample z ∈ m
in a low dimensional space and quantify the change in the margin solely in terms
of the number n of dimensions used and a training sample independent quantity,
then we can use the margin plus an effective small dimensionality of feature space
to tighten the bound on the expected risk of a single classifier.
3 These datasets are taken from the UCI Benchmark Repository found at http://www.ics.uci.edu/~mlearn.
[Figure 5.1: two panels of generalisation error versus polynomial degree p.]
Figure 5.1 Expected risks of classifiers learned by a support vector machine with (solid
line) and without (dashed line) normalization of the feature vectors xi . The error bars
indicate one standard deviation over 100 random splits of the datasets. The plots are
obtained on the thyroid dataset (left) and the sonar dataset (right).
Remark 5.13 (“Risky” bounds) The way we incorporated prior knowledge into
this bound was minimal. In fact, by making the assumption of a uniform measure
PW on the surface of a sphere we have chosen the most uninformative prior pos-
sible. Therefore our result is solution independent; it does not matter where (on the unit sphere) the margin Γ_z(w) is observed. Remarkably, the PAC-Bayesian view offers ways to construct "risky" bounds by putting much more prior probability on a certain region of the hypothesis space W. Moreover, we can incorporate unlabeled data much more easily by carefully adjusting our prior PW.
So far we have studied uniform bounds only; in the classical PAC and VC
framework we bounded the uniform convergence of training errors to expected
risks (see Section 4.2.1). In the luckiness framework we bounded the expected
risk uniformly over the (random) version space (see Theorem 4.19). In the PAC
Bayesian framework we studied bounds on the expected risk of the Gibbs classifi-
cation strategy uniformly over all subsets of hypothesis (version) space (Theorem
5.2 and 5.6), or possible posterior measures (equation (5.9)). We must recall, how-
ever, that these results are more than is needed. Ultimately we would like to bound
the generalization error of a given algorithm rather than proving uniform bounds
on the expected risk. In this section we will present such an analysis for algorithms
that can be expressed as so-called compression schemes. The idea behind compres-
sion schemes stems from the information theoretical analysis of learning where the
action of a learning algorithm is viewed as summarization or compression of the
training sample z ∈ Z^m into a single function. Since the uncertainty is only within the m classes y ∈ Y^m (given the m objects x ∈ X^m) the protocol is as follows: The learning algorithm gets to know the whole training sample z = (x, y) ∈ (X × Y)^m and must transfer d bits to a classification algorithm that already knows the m training objects x ∈ X^m. The requirement on the choice of d ∈ ℕ is that the classifi-
cation algorithm must be able to correctly classify the whole training sample by
just knowing the d bits and the objects x. If this is possible then the sequence y of classes must contain some redundancies w.r.t. the classification algorithm's ability to reproduce classes, i.e., the hypothesis space H ⊆ Y^X chosen. Intuitively, a small
compression coefficient d/m should imply a small expected risk of the classifica-
tion strategy parameterized by the d bits. This will be shown in the next subsec-
tion. In the subsequent subsection we apply the resulting compression bound to the
perceptron learning algorithm to prove the seemingly paradoxical result that there
exists an upper bound on its generalization error driven by the margin a support
vector machine would have achieved on the same training sample. This example
should be understood as an example of the practical power of the compression
framework rather than a negative result on the margin as a measure of the effective
complexity of single (real-valued) hypotheses.
In order to use the notion of compression schemes for bounds on the generalization error R[A, z] of a fixed learning algorithm A : ∪_{m=1}^∞ Z^m → H ⊆ Y^X we are required to formally cast the latter into a compression framework. The learning algorithm A must be expressed as the composition of a compression and a reconstruction function. More formally this reads as follows:

Definition 5.14 (Compression scheme) Let the set I_{d,m} ⊂ {1, …, m}^d comprise all index vectors of size exactly d ∈ ℕ with pairwise distinct entries,

I_{d,m} = { (i₁, …, i_d) ∈ {1, …, m}^d | ∀ j ≠ k : i_j ≠ i_k } .

Given a training sample z ∈ Z^m and an index vector i ∈ I_{d,m}, let z_i be the subsequence indexed by i,

z_i ≝ (z_{i₁}, …, z_{i_d}) .

Example 5.16 (Support vector learning) In order to see that support vector learning fits into the compression framework we notice that, due to the stationarity conditions at the solutions α̂ ∈ ℝ^m, ξ̂ ∈ ℝ^m of the mathematical programs presented in Section B.5,

∀i ∈ {1, …, m} :  α̂ᵢ · ( yᵢ⟨xᵢ, ŵ⟩ − 1 + ξ̂ᵢ ) = 0 .   (5.14)

By (5.14), the left-out training examples must have had expansion coefficients of zero. Further, the ordering of z_i is irrelevant. As a consequence, the support vector learning algorithm is a permutation invariant compression scheme.
It is interesting to note that the relevance vector machine algorithm (see Section
3.3) is not expressible as a compression scheme. Consider that we conduct a first
run to select the training examples which have non-zero expansion coefficients in
the final expansion. A rerun on this smaller subset of the training sample would not
obtain the same classifier because the computation of the few nonzero expansion coefficients αᵢ uses all the m classes y ∈ Y^m and examples x ∈ X^m given (see Algorithm 7).
In the following we confine ourselves to the zero-one loss l_{0−1}(ŷ, y) = I_{ŷ≠y}. As mentioned earlier this is not a severe restriction and can be overcome by using different large deviation bounds (see Subsection A.5.2). Let us start with the simple PAC case, that is, we assume that there exists a hypothesis h* ∈ H such that P_{Y|X=x}(y) = I_{h*(x)=y}. Then, for a given compression scheme of size d ≤ m we will bound the probability of having training samples z ∈ Z^m such that the training error Remp[R_d(z_{C_d(z)}), z] = 0 but the expected risk R[R_d(z_{C_d(z)})] of the function learned is greater than ε. This probability can be upper bounded by the sum of the probabilities that the reconstruction function R_d(z_i) returns a hypothesis with this property over the choice of i ∈ I_{d,m}, i.e.,

P_{Z^m}( Remp[R_d(Z_{C_d(Z)}), Z] = 0 ∧ R[R_d(Z_{C_d(Z)})] > ε )
  ≤ P_{Z^m}( ∃ i ∈ I_{d,m} : Remp[R_d(Z_i), Z] = 0 ∧ R[R_d(Z_i)] > ε )
  ≤ Σ_{i∈I_{d,m}} P_{Z^m}( Remp[R_d(Z_i), Z] = 0 ∧ R[R_d(Z_i)] > ε ) .   (5.15)
different index vectors i ∈ I_{d,m} equals⁴ (m choose d)·d!, which finally gives that the probability in (5.15) is strictly less than (m choose d)·d!·(1 − ε)^{m−d}. This statement is equivalent to the following assertion ϒ_i(z, m, δ), which holds with probability at least 1 − δ over the random draw of the training sample z ∈ Z^m for all compression schemes (C_i, R_i) of size i:

Remp[R_i(z_{C_i(z)}), z] ≠ 0  ∨  R[R_i(z_{C_i(z)})] ≤ ( ln((m choose i)·i!) + ln(1/δ) ) / (m − i) .
Using Lemma 4.14 with uniform PS over the numbers i ∈ {1, . . . , m} we have
proven the following theorem.
Theorem 5.17 (PAC compression bound) Suppose we are given a fixed learning algorithm A : ∪_{m=1}^∞ Z^m → H ⊆ Y^X which is a compression scheme. For any probability measure PZ and any δ ∈ (0, 1], with probability at least 1 − δ over the random draw of the training sample z ∈ Z^m, if Remp[A(z), z] = 0 and A(z) corresponds to a compression scheme of size d, the expected risk R[A(z)] of the function A(z) ∈ H is bounded from above by

R[A(z)] ≤ (1/(m − d)) · ( ln((m choose d)·d!) + ln(m) + ln(1/δ) ) .

Furthermore, if A is a permutation invariant compression scheme, then

R[A(z)] ≤ (1/(m − d)) · ( ln(m choose d) + ln(m) + ln(1/δ) ) .   (5.16)
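To get a feeling for the magnitude of the permutation-invariant bound (5.16), the following sketch (hypothetical sample sizes and compression sizes; not code from the book) evaluates it numerically:

```python
import math

def pac_compression_bound(m, d, delta):
    """Permutation-invariant PAC compression bound (5.16); valid when the
    hypothesis reconstructed from d examples has zero training error."""
    log_binom = math.lgamma(m + 1) - math.lgamma(d + 1) - math.lgamma(m - d + 1)
    return (log_binom + math.log(m) + math.log(1.0 / delta)) / (m - d)

m, delta = 1000, 0.05
for d in (5, 20, 100):
    print(d, round(pac_compression_bound(m, d, delta), 3))
```

Note how the bound degrades only logarithmically in m but roughly linearly in the compression size d.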
In order to understand the full power of this theorem we note that, according to Theorem A.105, for all d ∈ {1, …, m}, (m/d)^d < Σ_{i=0}^{d} (m choose i) < (em/d)^d, which shows that for permutation invariant compression schemes the generalization error bound (5.16) can be written as⁵

R[A(z)] ≤ (2/m) · ( d·ln(em/d) + ln(m) + ln(1/δ) ) .

Disregarding the improved constants, this is the same bound as obtained in the PAC framework (see equations (4.21) and (4.19)) with the important difference that the number d of examples used is not known a-priori but depends on the training sample z.
4 Note that in the case of permutation invariant compression schemes the factor of d! vanishes.
5 Note that this result is trivially true for d > m/2; in the other case we used 1/ (m − d) ≤ 2/m.
Again, for any i ∈ I_{d,m} we know that if R_d(z_i) commits no more than q errors on z, then the number of errors committed on the subset (z \ z_i) ∈ Z^{m−d} cannot exceed q. Hence, any summand in the last expression is upper bounded by

E_{Z^d}[ P_{Z^{m−d}|Z^d=z}( (Remp[R_d(z), Z] ≤ q/(m − d)) ∧ (R[R_d(z)] > ε) ) ] .
application of the union bound over all the (m choose d)·d! different index vectors i ∈ I_{d,m} we conclude that the following statement ϒ_{i,q}(z, m, δ) holds, with probability at least 1 − δ over the random draw of the training sample z ∈ Z^m, for all lossy compression schemes of size i and maximal number of training errors q:

Remp[h_z, z] > q/m  ∨  R[h_z] ≤ q/(m − d) + √( ( ln((m choose d)·d!) + ln(1/δ) ) / (2(m − d)) ) ,
statements for all the possible values of i ∈ {1, . . . , m} and q ∈ {1, . . . , m} and
using Lemma 4.14 with uniform PS we have proven the following theorem.
Theorem 5.18 (Lossy compression bound) Suppose we are given a fixed learning algorithm A : ∪_{m=1}^∞ Z^m → H ⊆ Y^X which is a compression scheme. For any probability measure PZ and any δ ∈ (0, 1], with probability at least 1 − δ over the random draw of the training sample z ∈ Z^m, if A(z) corresponds to a compression scheme of size d, the expected risk R[A(z)] of the function A(z) ∈ H is bounded from above by

R[A(z)] ≤ (m/(m − d)) · Remp[A(z), z] + √( ( ln((m choose d)·d!) + 2·ln(m) + ln(1/δ) ) / (2(m − d)) ) .

Furthermore, if A is a permutation invariant compression scheme, then

R[A(z)] ≤ (m/(m − d)) · Remp[A(z), z] + √( ( ln(m choose d) + 2·ln(m) + ln(1/δ) ) / (2(m − d)) ) .
This result and Theorem 5.17 constitute the basic results of the compression frame-
work. One of the most intriguing features of these inequalities is that, regardless
of any a-priori complexity measure (e.g., the VC dimension ϑ or the size |H| of the hypothesis space H), they will always attain nontrivial values, provided that the
number d of training examples used is at least as small as half the training sample
size. To some extent, this is similar reasoning to that used in the luckiness frame-
work. The difference, however, is that in the current framework we have considered
what we are actually interested in—the expected risk R [ (z)] of the hypothe-
sis (z) ∈ learned—rather than providing uniform bounds over version space
V (z) which introduce additional technical difficulties such as probable smooth-
ness (see Definition 4.18).
Remark 5.19 (Ghost sample) There exists an interesting relationship between the
technique of symmetrization by a ghost sample (see page 124) used in the PAC/VC
framework and the compression framework. Since we consider only the expected
risk of the hypothesis learned by a fixed learning algorithm and assume that this hypothesis can be reconstructed from d ≪ m training examples, the remaining m − d training examples constitute a ghost sample on which the hypothesis succeeds
(lossless compression) or commits a small number q of errors (lossy compression).
training examples. Note that it is possible to present the same training example
(xi , yi ) ∈ z several times.
mistakes, i.e.,
6 Here we assume that, in a given ordering, all indices to removed examples have been dropped.
Example 5.24 (Perceptron learning algorithm) Let us consider again the perceptron learning algorithm given at page 321. This algorithm is by definition a mistake-driven algorithm, having a mistake bound

M_A(z) = ( max_{xᵢ∈x} ‖φ(xᵢ)‖ / max_{w∈W} γ_z(w) )² = ( max_{xᵢ∈x} ‖φ(xᵢ)‖ / γ_z(w_SVM) )² ,

where φ is the mapping of the objects x ∈ X into a feature space K (see also Definition 2.2). Remarkably, this mistake bound is dominated by the margin a support vector machine would have achieved on the same training sample z. Substituting this result directly into equation (5.17) shows that we can give a tighter generalization error bound for the perceptron learning algorithm by studying its properties than for the support vector machine algorithm when using the uniform bounds presented in the last chapter (see Section 4.4 and Theorem 4.32).
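To make the mistake-driven protocol concrete, here is a minimal dual-form (kernel) perceptron sketch (illustrative only; it is not the book's pseudocode from page 321). Note that the expansion coefficients are non-zero only for examples on which a mistake occurred, which is exactly what makes the algorithm a compression scheme:

```python
import numpy as np

def kernel_perceptron(X, y, kernel, max_epochs=100):
    """Mistake-driven perceptron in dual form."""
    m = len(y)
    alpha = np.zeros(m)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(m):
            if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                alpha[i] += 1.0          # update only on a mistake
                mistakes += 1
        if mistakes == 0:                # separable in feature space
            break
    return alpha

X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
alpha = kernel_perceptron(X, y, kernel=lambda a, b: float(a @ b))
print(alpha)   # zeros correspond to examples "compressed away"
```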
Example 5.25 (Halving algorithm) For finite hypothesis spaces H, there exists a
mistake-driven learning algorithm which achieves a minimal mistake bound. This
on-line learning algorithm is called the halving algorithm and proceeds as follows:
In this last section we present a very recently developed method for studying
the generalization error of learning algorithms. In contrast to the compression
framework we now do not need to enforce the existence of compression and
reconstruction functions. Instead, we take advantage of the robustness of a learning
algorithm. The robustness of a learning algorithm is a measure of the influence
of an additional training example (x̃, ỹ) ∈ Z on the learned hypothesis A(z) ∈ H. Here, the influence is quantified in terms of the loss achieved at any (potential) test object x ∈ X. We observe that a robust learning algorithm guarantees that both the difference in expected risks and empirical risks of the function learned is bounded even if we replace one training example by its worst counterpart. This observation is of great help when using McDiarmid's inequality given in Theorem A.119—a large deviation result perfectly suited for the current purpose. This inequality bounds the probability that a function of the training sample z ∈ Z^m (the difference R[A(z)] − Remp[A(z), z] of the expected and empirical risk of the function
learned from the training sample z) deviates from its expected value in terms of
the maximum deviation between the function’s value before and after one example
is changed. In fact, the definition of robustness of a learning algorithm is mainly
chosen so as to be able to apply this powerful inequality to our current problem.
Because of its simplicity we shall start with the regression estimation case, that is,
we consider a training sample z = (x, t) ∈ (X × ℝ)^m drawn iid from an unknown distribution PZ = PT|X PX. In this case the hypotheses are given by real-valued functions f ∈ F ⊆ ℝ^X. Further, the loss function l : ℝ × ℝ → ℝ becomes a function of predicted real values t̂ and observed real values t (see, for example, the squared loss defined on page 82). Before proceeding we introduce some abbreviations for the sake of notational simplicity. Given a sample z ∈ Z^m, a natural number i ∈ {1, …, m} and an example z ∈ Z let

z\i ≝ (z₁, …, z_{i−1}, z_{i+1}, …, z_m) ∈ Z^{m−1} ,
z_{i↔z} ≝ (z₁, …, z_{i−1}, z, z_{i+1}, …, z_m) ∈ Z^m ,

be the sample with the ith element deleted or the ith element replaced by z, respectively. Whenever the learning algorithm A is clear from the context, we use f_z ≝ A(z). Then the notion of robustness of a learning algorithm is formally defined as follows.
Proof  First, we notice that l(f_z(x), t) − l(f_{z_{i↔z̃}}(x), t) equals

( l(f_z(x), t) − l(f_{z\i}(x), t) ) + ( l(f_{z\i}(x), t) − l(f_{z_{i↔z̃}}(x), t) ) .

The result then follows by the triangle inequality applied to the two bracketed terms, each of whose absolute value is by definition upper bounded by βm.
Note that the value of βm depends on the training sample size m, so, for larger
training samples the influence of a single example (x, t) ∈ should be decreasing
toward zero. We will call an algorithm “stable” if the decrease in βm is of order
one, limm→∞ βm · m −1 = 0. In order to compute values of βm for a rather large
class of learning algorithms it is useful to introduce the following concept.
Example 5.29 (Soft margin loss) If we consider the linear soft margin loss function given in equation (2.47), namely l_lin(t̂, y) = max(1 − y·t̂, 0) where y ∈ {−1, +1}, we see that

| l_lin(t̂, y) − l_lin(t̃, y) | ≤ | y·t̃ − y·t̂ | = | y | · | t̃ − t̂ | = | t̂ − t̃ | .

This shows that l_lin is Lipschitz continuous with Lipschitz constant C_{l_lin} = 1.
Using the concept of Lipschitz continuous loss functions we can upper bound the
value of βm for a rather large class of learning algorithms using the following
theorem (see also Subsection 2.2.2).
A(z) = argmin_{f ∈ F} [ (1/m) · Σ_{(xᵢ,tᵢ)∈z} l(f(xᵢ), tᵢ) + λ‖f‖² ] ,   (5.19)

βm ≤ Cl²·κ² / (2λm) ,

where κ = sup_{x∈X} √(k(x, x)). Note that, in this formulation, the value m is fixed for any training sample z.
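For concreteness, here is a small sketch (hypothetical parameter values; not code from the book) that evaluates this stability constant for a regularized algorithm of the form (5.19):

```python
def stability_constant(C_l, kappa, lam, m):
    """Upper bound on the uniform stability: beta_m <= C_l^2 * kappa^2 / (2 * lam * m)."""
    return (C_l ** 2) * (kappa ** 2) / (2.0 * lam * m)

# linear soft margin loss (C_l = 1) with a normalized kernel (kappa = 1)
for m in (100, 1000, 10000):
    print(m, stability_constant(C_l=1.0, kappa=1.0, lam=0.1, m=m))
```

The stability constant decays as 1/m for a fixed trade-off parameter λ, which is exactly the regime needed for the bounds below.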
The proof of this result is given in Appendix C.9. By the generality of expression
(5.19) it is possible to cast most of the learning algorithms presented in Part I of
this book into this framework. Now, in order to obtain generalization error bounds
for βm –stable learning algorithms we proceed as follows.
In Appendix C.9 we have carried out these steps to obtain generalization error
bounds both in terms of the training error as well as of the leave-one-out error.
This is summarized in the following theorem.
At first we note that these two bounds are essentially the same, i.e., the additive correction is ≈ βm and the decay of the probability is O(exp(−ε²/(m·βm²))). This
comes as a slight surprise as VC theory appears to indicate that the training er-
ror Remp is only a good indicator of the generalization error of an algorithm when
the hypothesis space is of small VC dimension (see Theorem 4.7). In contrast the
leave-one-out error disregards VC dimension and is an almost unbiased estima-
tor of the expected generalization error of an algorithm (see Theorem 2.36). We
must recall, however, that VC theory is used in the study of empirical risk mini-
mization algorithms which only consider the training error as the cost function to
be minimized. In contrast, in the current formulation we have to guarantee a cer-
tain stability of the learning algorithm. In particular, when considering the result
of Theorem 5.31 we see that, in the case of λ → 0, that is, the learning algorithm
minimizes the empirical risk only, we can no longer guarantee a finite stability. In
light of this fact, let us consider βm-stable algorithms such that βm ≤ η·m⁻¹, i.e., the influence of a single new training example is inversely proportional to the training sample size m, with a decay of η ∈ ℝ⁺. With this, the first inequality in Theorem 5.32 states that, with probability at least 1 − δ over the random draw of the training sample z ∈ Z^m,

R[A(z)] ≤ Remp[A(z), z] + 2η/m + (4η + b)·√( 2·ln(1/δ)/m ) .
This is an amazingly tight generalization error bound whenever η ≪ √m, because then the expression is dominated by the second term. Moreover, this result provides us with practical guidelines on the possible values of the trade-off parameter λ.
Since for regularized risk minimization algorithms of the form (5.19) we know that η ≤ Cl²κ²/(2λ), it follows that λ ≥ Cl²κ²/(bm), because otherwise the bound would be trivial (as large as b) regardless of the empirical term Remp[A(z), z]. Before we proceed to the classification learning case we show an application of this new generalization error bound for a stable regression estimation algorithm presented in Part I.
[Plot: the clipped loss lτ as a function of the real-valued output, for τ = 2, 1 and 0.5.]
Figure 5.2 The clipped linear soft margin loss lτ for various values of τ > 0. Note that
for τ → 0 the loss function approaches the zero-one loss I−y f (x)≥0 .
although the following reasoning also applies to any loss that takes only a finite
set of values. Similarly to the results presented in the last subsection we would
like to determine the βm-stability of a given classification learning algorithm A : ∪_{m=1}^∞ Z^m → H. It turns out, however, that the only two possible values of βm are 0 and 1. The former case occurs if, for all training samples z ∈ Z^m and all test examples (x, y) ∈ Z,

| I_{A(z)(x)≠y} − I_{A(z\i)(x)≠y} | = 0 ,

which is only possible if H contains only one hypothesis. If we exclude this trivial
case from our considerations then we see that Theorem 5.32 only gives trivial
results for classification learning algorithms. This is mainly due to the coarseness
of the loss function l0−1 .
In order to circumvent this problem we shall exploit the real-valued output f(x) when considering classifiers of the form h(·) = sign(f(·)). Since our ultimate interest is in the generalization error R[h] = E_XY[I_{h(X)≠Y}] = E_XY[I_{Y·f(X)≤0}] we will consider a loss function lτ : ℝ × Y → [0, 1] which is an upper bound of the function I_{y·f(x)≤0}. To see the advantage of such a loss function note that l_{0−1}(ŷ, y) ≤ lτ(t, y) implies that E_XY[l_{0−1}(sign(f(X)), Y)] ≤ E_XY[lτ(f(X), Y)]. Another useful requirement on the refined loss function lτ is
Lipschitz continuity with a small Lipschitz constant. This can be achieved by a slight refinement of the linear soft margin loss l_lin considered in Example 5.29. The generalization is obtained by requiring a real-valued output of at least τ on the correct side. Since the loss function has to pass through 1 for f(x) = 0 it follows that the steepness of the function is 1/τ, giving the Lipschitz constant 1/τ. Finally we note that lτ should always be in the interval [0, 1] because the zero-one loss l_{0−1} never exceeds 1. Hence, we obtain the following version of the linear soft margin loss which will serve our needs (see also Figure 5.2):

lτ(t, y) = { 0          if y·t > τ
           { 1 − y·t/τ  if y·t ∈ [0, τ]
           { 1          if y·t < 0 .   (5.20)
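A direct transcription of (5.20) into code (an illustrative helper, assuming NumPy):

```python
import numpy as np

def clipped_loss(t, y, tau):
    """Clipped linear soft margin loss l_tau from (5.20):
    0 if y*t > tau, 1 - y*t/tau if 0 <= y*t <= tau, 1 if y*t < 0."""
    yt = y * t
    return np.clip(1.0 - yt / tau, 0.0, 1.0)

print(clipped_loss(t=np.array([2.0, 0.25, -0.5]), y=np.array([1, 1, 1]), tau=0.5))
# -> [0.  0.5 1. ]
```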
A direct application of Theorem 5.32 to the expected and empirical risks using
the loss function lτ yields an algorithmic stability result for classification learning
algorithms which use a thresholded real-valued function for classification.
Again, we have the intuitive interpretation that, for smaller values of τ, the term Remp^τ[f, z] = (1/m)·Σ_{i=1}^{m} lτ(f(xᵢ), yᵢ) is provably non-increasing whereas the term κ²/(λmτ²) is always increasing. It is worth considering this theorem for the special case of linear soft margin support vector machines for classification learning (see Subsection 2.4.2). Without loss of generality let us assume that κ² = 1, as for RBF kernels and normalized kernels (see Table 2.1). Noticing that the sum Σ_{i=1}^{m} ξ̂ᵢ of the slacks ξ̂ ∈ ℝ^m at the solution upper bounds m · Remp[A(z), z] we see
that the linear soft margin algorithm A_SVC presented in Subsection 2.4.2 has a generalization error bound w.r.t. the zero-one loss l_{0−1} of

R[sign(A_SVC(z))] ≤ (1/m)·‖ξ̂‖₁ + 1/(λm) + √( 2(1 + λ)²·ln(1/δ) / (λ²m) ) .
This bound provides an interesting model selection criterion for linear soft margin
support vector machines. The model selection problem we considered here is the
selection of the appropriate value of λ—the assumed noise level. In contrast to the
results of Subsection 4.4.3 this bound only holds for the linear soft margin support
vector machine and can thus be considered practically useful. This, however,
remains to be shown empirically. The results in this section are so recent that no
empirical studies have yet been carried out.
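As a sketch of how such a model selection criterion might be used in practice (a hypothetical helper that assumes the bound takes the form displayed above; the slack sums would come from separate training runs):

```python
import math

def svc_stability_bound(slack_sum, m, lam, delta):
    """Stability-based bound on the zero-one generalization error of the
    linear soft margin SVM, using the constants of the bound displayed above."""
    emp = slack_sum / m
    dev = math.sqrt(2 * (1 + lam) ** 2 * math.log(1 / delta) / (lam ** 2 * m))
    return emp + 1.0 / (lam * m) + dev

# pick lambda by minimizing the bound (hypothetical lambda -> sum of slacks)
m, delta = 500, 0.05
candidates = {0.01: 40.0, 0.1: 55.0, 1.0: 120.0}
best = min(candidates, key=lambda lam: svc_stability_bound(candidates[lam], m, lam, delta))
print(best)
```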
encoding is that it allows us to get the PAC like results we sought...”. In contrast to
their results—which hold for single classifiers drawn according to the posterior
measure—McAllester (1998) considered classification strategies which allowed
him to tighten the results and ease their proofs. Theorems 5.1, 5.2 and 5.6 can
be found in this paper; the more general result given in equation (5.9) together
with some remarks on how to generalize the framework to arbitrary loss functions
can be found in McAllester (1999). The simple relationship between the expected
risk of the Gibbs and the Bayes classification strategies (Theorem 5.7) is taken
from Herbrich and Graepel (2001b). The full power of the bound for the Bayesian
classifier can be exploited by making use of the fact that for “benign” hypothesis
spaces the expected risk of one classifier can be expressed as the generalization
error of a subset of classifiers. This analysis, together with the final PAC-Bayesian
margin bound (Theorem 5.10) can be found in Herbrich and Graepel (2001b). Re-
cently, it has been shown that not only the evidence can be justified in a distribution
free framework, but also the estimated posterior probability PH|Zm =z (H (x) = y)
leads to a decrease in expected risk when used as a rejection criterion (see Freund
et al. (2000)). In contrast to the bounds in the PAC-Bayesian framework, this paper
studies only the generalization error of their (pseudo)-Bayesian prediction method
which results in remarkably tight bounds. A work preceding Shawe-Taylor and Williamson (1997, p. 4) is by Haussler et al. (1994) where it was assumed that
PH is known to the learning algorithm and corresponds to the probability of target
concepts. Rather than studying the performance of Bayesian classification strate-
gies for a fixed, but unknown, data distribution PZ it was assumed that the prior
belief PH is used to govern PY|X=x . It was shown that the average generalization
error of classification strategies over PH can be arbitrarily bad without assuming
that the learning algorithm uses the same PH . It should be noted, however, that this
quantity does not satisfy the PAC desiderata of not knowing the data distribution.
In the following section we introduced the notion of compression schemes.
One of the earliest works in that area is by Littlestone and Warmuth (1986) which
was summarized and extended to on-line learning algorithms in Floyd and War-
muth (1995). Theorem 5.17 is taken from this paper; the lossy compression scheme
bound (Theorem 5.18) was proven in Graepel et al. (2000); see also Marchand and
Shawe-Taylor (2001) for a result that avoids the exponent two at the deviation ε.
Interestingly, all these results can be extended further by allowing the learning algo-
rithm to save an additional b bits, which would only incur an additional summand of b/m in the resulting generalization error bound. An interesting combination of
large margins of the linear classifier learned by an algorithm and sparsity w.r.t. the
expansion coefficients is presented in Herbrich et al. (2000). The subsection on the
The purpose of this appendix is twofold: On the one hand, it should serve as
a reference for the case that we need more exactness in the reasoning. On the
other hand, it gives brief introductions to probability theory, functional analysis
and ill-posed problems. The section on probability theory is based on Feller (1966,
Chapter 4) and Kockelkorn (2000). The following section about functional analysis
is compiled from Barner and Flohr (1989), Cristianini and Shawe-Taylor (2000)
and Debnath and Mikusinski (1998). The section about ill-posed problems is taken
from Tikhonov and Arsenin (1977). Finally, we present a set of inequalities needed
for the derivation of some of the results in the book.
A.1 Notation
In general, sets are denoted by upper-case roman letters, e.g., X, whilst elements are denoted by lower-case roman letters, e.g., x. For sets, the indicator function I_X is defined by

I_X(x) ≝ { 0 if x ∉ X ,  1 if x ∈ X } .
Definition A.2 (Borel sets) Given Ω = ℝⁿ, the Borel sets Bⁿ are the smallest σ–algebra that contains all open intervals

{ (x₁, …, xₙ) ∈ ℝⁿ | ∀i ∈ {1, …, n} : xᵢ ∈ (aᵢ, bᵢ) }
In order to distinguish random variables from ordinary functions we also use sans
serif letters to denote them, e.g., Y = f (X). Thus a random variable Y = f (X)
induces a measure PY which acts on the real line, i.e., Ω = ℝ, and for which the σ–algebra contains at least the intervals {(−∞, z] | z ∈ ℝ}. The measure PY is induced by the measure PX and f, i.e.,

∀Y ∈ B¹ :  PY(Y) ≝ PX({x ∈ Ω | f(x) ∈ Y}) .
Definition A.6 (Distribution function and density) For a random variable X the function FX : ℝ → [0, 1] defined by

FX(x) ≝ PX(X ≤ x)

is called the distribution function of X. The function fX : ℝ → ℝ is called the density of X if

∀z ∈ ℝ :  FX(z) = ∫_{x≤z} fX(x) dx .
Definition A.8 (Variance) The variance Var(X) of a random variable X is defined by

Var(X) ≝ EX[(X − µ)²] = EX[X²] − µ² ,

where µ = EX[X] is the expectation of the random variable X.
Definition A.10 (Marginal and conditional measure) Given the joint probability space (X × Y, PXY), the marginal probability measure PX is defined, for every measurable X ⊆ X, by

PX(X) ≝ PXY(X × Y) ,
which should be clear from the context. A particular element of the sample space Xⁿ is then denoted by the n–tuple x. Given an n–tuple x = (x₁, …, xₙ), the abbreviation x ∈ x should be understood as ∃i ∈ {1, …, n} : xᵢ = x.
Definition A.13 (Covariance and covariance matrix) Given two random variables X and Y with a joint measure PXY, the covariance Cov(X, Y) is defined by

Cov(X, Y) ≝ EXY[(X − µ)(Y − ν)] ,

where µ = EX[X] and ν = EY[Y]. Note that Cov(X, X) = Var(X). Given n random variables X = (X₁, …, Xₙ) and m random variables Y = (Y₁, …, Yₘ) having a joint measure PXY, the n × m covariance matrix Cov(X, Y) is defined by

Cov(X, Y) ≝ ( Cov(X₁, Y₁) ··· Cov(X₁, Yₘ) ; … ; Cov(Xₙ, Y₁) ··· Cov(Xₙ, Yₘ) ) .

If X = Y we abbreviate Cov(X, X) ≝ Cov(X).
In this subsection we will present some results for the expectation and variance of
sums and products of random variables. These will prove to be useful for most of
Chapter 3.
Theorem A.15 (Expectation of sum and products) Given two independent ran-
dom variables X and Y
EXY X · Y = EX X · EY Y , (A.1)
EXY X + Y = EX X + EY Y . (A.2)
whenever the two terms on the r.h.s. exist. Note that statement (A.2) is also true if
X and Y are not independent.
Corollary A.19 (Variance scaling) For any random variable X and any c ∈ Ê we
have Var (cX) = c2 · Var (X).
From the definition of conditional and marginal measures, we have the following
important theorem.
Theorem A.22 (Bayes' theorem (Bayes 1763)) Given the joint probability space (X × Y, PXY), for all measurable X ⊆ X with PX(X) > 0 and all measurable Y ⊆ Y with PY(Y) > 0,

P_{X|Y∈Y}(X) = P_{Y|X∈X}(Y) · PX(X) / PY(Y) .

If both X and Y possess densities fX and fY the theorem reads as follows:

∀y ∈ ℝ : ∀x ∈ ℝ :  f_{X|Y=y}(x) = f_{Y|X=x}(y) · fX(x) / fY(y) .
Theorem A.23 (Scheffé's theorem) For all densities fX and fY on the measurable space (ℝⁿ, Bⁿ)

∫_{ℝⁿ} |fX(x) − fY(x)| dx = 2 · sup_{A∈Bⁿ} |PX(A) − PY(A)| .   (A.3)

Proof  Choose C = {x ∈ ℝⁿ | fX(x) > fY(x)} ∈ Bⁿ and Cᶜ = ℝⁿ \ C ∈ Bⁿ. Then, for the L₁ distance between fX and fY,

∫_{ℝⁿ} |fX(x) − fY(x)| dx = ∫_C |fX(x) − fY(x)| dx + ∫_{Cᶜ} |fX(x) − fY(x)| dx
  = ∫_C (fX(x) − fY(x)) dx + ∫_{Cᶜ} (fY(x) − fX(x)) dx
  = PX(C) − PY(C) + (1 − PY(C)) − (1 − PX(C))
  = 2·(PX(C) − PY(C)) .   (A.4)

Furthermore, for any A ∈ Bⁿ,

|PX(A) − PY(A)| = | ∫_{A∩C} (fX(x) − fY(x)) dx − ∫_{A∩Cᶜ} (fY(x) − fX(x)) dx |
  ≤ max( ∫_{A∩C} (fX(x) − fY(x)) dx , ∫_{A∩Cᶜ} (fY(x) − fX(x)) dx )
  ≤ max( ∫_C (fX(x) − fY(x)) dx , ∫_{Cᶜ} (fY(x) − fX(x)) dx )
  = PX(C) − PY(C) ,

where the second line follows by the definition of C (both integrals are non-negative) and the third line is always true because ∀a > 0, b > 0 : |a − b| ≤ max(a, b). We have shown that the supremum in (A.3) is attained at C ∈ Bⁿ and thus equation (A.4) proves the theorem.
Figure A.1  Geometrical proof of Scheffé's theorem for ℝ¹. The quantity ∫_{ℝⁿ} |fX(x) − fY(x)| dx is given by the sum of the two shaded areas excluding the striped area A. Given the set C = {x ∈ ℝ | fX(x) > fY(x)}, the quantity PX(C) − PY(C) is given by the light shaded area only. Since the area under both curves fX and fY is exactly one, it must hold that PX(C) − PY(C) = PY(Cᶜ) − PX(Cᶜ) because we subtract A from both curves. This proves Scheffé's theorem.
For the following measures we assume that the sample space Ω is the set of all natural numbers (including 0) and the σ–algebra is the collection of all subsets of Ω. In Table A.1 we have summarized the most commonly used probability measures on natural numbers. Note that, for the binomial distribution, we assume that (n choose i) = 0 whenever i > n.
The Bernoulli distribution is used to model the outcome of a coin toss with a chance of p for "heads"; 1 is used to indicate "head". The binomial distribution models the outcome of i "heads" in n independent tosses of a coin with a chance of p for "heads". The Poisson distribution is the limiting case of the binomial distribution if the number of tosses tends to infinity while the expectation value np = λ remains constant.
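This limiting behaviour is easy to verify numerically (an illustrative sketch with hypothetical values of λ and i):

```python
from math import comb, exp, factorial

lam, i = 3.0, 2
for n in (10, 100, 10000):
    p = lam / n
    binom = comb(n, i) * p**i * (1 - p)**(n - i)   # Binomial(n, lam/n) at i
    print(n, round(binom, 6))
print("poisson", round(lam**i / factorial(i) * exp(-lam), 6))
```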
Name             Probability measure / density                                        EX[X]      Var(X)
Bernoulli(p)     PX(1) = 1 − PX(0) = p                                                p          p(1 − p)
Binomial(n, p)   PX(i) = (n choose i) pⁱ (1 − p)^(n−i)                                np         np(1 − p)
Poisson(λ)       PX(i) = (λⁱ/i!)·exp(−λ)                                              λ          λ
Uniform(A)       PX(i) = (1/|A|)·I_{i∈A}                                              Ā          A̅² − Ā²
Normal(µ, σ²)    fX(x) = (1/(√(2π)·σ))·exp(−(x − µ)²/(2σ²))                           µ          σ²
Exp(λ)           fX(x) = λ·exp(−λx)·I_{x≥0}                                           1/λ        1/λ²
Gamma(α, β)      fX(x) = (x^(α−1)/(β^α·Γ(α)))·exp(−x/β)·I_{x>0}                       αβ         αβ²
Beta(α, β)       fX(x) = (Γ(α+β)/(Γ(α)Γ(β)))·x^(α−1)(1 − x)^(β−1)·I_{x∈[0,1]}         α/(α+β)    αβ/((α+β)²(α+β+1))
Uniform([a, b])  fX(x) = (1/(b − a))·I_{x∈[a,b]}                                      (a+b)/2    (b−a)²/12

Table A.1  Summary of measures over the natural numbers ℕ (first four rows) and the real line ℝ¹ (last five rows). Note that Γ(α) = ∫₀^∞ t^(α−1) exp(−t) dt denotes the Gamma function. For plots of these distributions see pages 209 and 210. Furthermore, the symbols Ā and A̅² denote (1/|A|)·Σ_{aᵢ∈A} aᵢ and (1/|A|)·Σ_{aᵢ∈A} aᵢ², respectively.
For the following measures we assume that the sample space Ω is the real line ℝ and the σ–algebra is the Borel sets B¹ (see Definition A.2). In Table A.1 we summarized commonly used probability measures on ℝ¹ by specifying their density function. For a comprehensive overview of measures on ℝ¹ see (Johnson et al. 1994).
Note that the exponential distribution Exp(λ) is a special case of the Gamma distribution because Gamma(1, β) = Exp(β⁻¹). The Beta distribution is the
conjugate prior distribution of the success probability in a Binomial measure (see
also Section 3.1). Finally, the normal or Gaussian distribution owes its importance
to the well known central limit theorem.
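As a small illustration of this conjugacy (hypothetical counts, not from the text): a Beta(α, β) prior on the success probability p, combined with k successes in n Bernoulli trials, yields a Beta(α + k, β + n − k) posterior.

```python
# Beta-Binomial conjugacy: prior Beta(alpha, beta), data k successes out of n
alpha, beta = 2.0, 2.0
n, k = 10, 7
alpha_post, beta_post = alpha + k, beta + n - k
print(alpha_post / (alpha_post + beta_post))   # posterior mean of p
```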
Figure A.2  The probability mass functions PX for the discrete measures in Table A.1 (parameters p ∈ {0.2, 0.5, 0.9}, λ ∈ {1, 2, 10} and b ∈ {1, 5, 20}). (Left) Bernoulli and Poisson measure. (Right) Binomial and uniform measure. In the uniform measure plot, we consider the sets A_b = {0, . . . , b}.
Probability Measures on ℝⁿ

Figure A.3  The densities fX for the continuous measures in Table A.1. (Left) Densities of the Gaussian/normal and the Gamma measure. (Right) Densities of the exponential and Beta measure.
Figure A.4  Density of the multidimensional normal distribution. (Left) Here, we used Σ = I and µ = 0. (Right) Density obtained from a transformation of the left density such that Cov(X, Y) = 0.5 and µ = 0.
Proof The theorem follows directly from Corollary A.16 and A.21.
First note that the denominator is independent of y. Thus let us start with the
numerator of (A.9). Using Definition A.26 we have that the latter is given by
1
c · exp − (x − X y) −1 (x − X y) + ( y − µ) −1 ( y − µ) ,
2
PV = Normal(µ_V, Σ_VV) ,   (A.11)
P_{U|V=v} = Normal( µ_U + Σ_UV·Σ_VV⁻¹·(v − µ_V) , Σ_UU − Σ_UV·Σ_VV⁻¹·Σ_VU ) ,   (A.12)
P_{V|U=u} = Normal( µ_V + Σ_VU·Σ_UU⁻¹·(u − µ_U) , Σ_VV − Σ_VU·Σ_UU⁻¹·Σ_UV ) ,   (A.13)

where

µ = (µ_U; µ_V) ,    Σ = ( Σ_UU  Σ_UV ; Σ_VU  Σ_VV ) .
Proof  The assertions (A.10) and (A.11) follow directly from Theorem A.27 considering that

U = ( I_r  0 ; 0  0 )·X ,    V = ( 0  0 ; 0  I_s )·X .

We shall prove equation (A.12) for the special case of µ = 0 only—the full result follows from Theorem A.27. First we exploit the fact that

f_{U|V=v}(u) = fX((u; v)) / fV(v) .   (A.14)

Since we know the density fV already, let us consider the joint density fX as a function of u and v. To this end we use equation (A.27) of Theorem A.80 to obtain a partitioned expression for the inverse Σ⁻¹ of the covariance matrix Σ, i.e.,

Σ⁻¹ = ( A  B ; Bᵀ  D ) ,

where the matrices A, B and D are given by

A = ( Σ_UU − Σ_UV·Σ_VV⁻¹·Σ_VU )⁻¹ ,
B = −A·Σ_UV·Σ_VV⁻¹ ,
D = Σ_VV⁻¹ + Σ_VV⁻¹·Σ_VU·A·Σ_UV·Σ_VV⁻¹ .

Now we can write the joint density fX((u; v)) as a function of u and v,

c · exp( −½·(u; v)ᵀΣ⁻¹(u; v) ) = c · exp( −½·( uᵀAu + 2uᵀBv + vᵀDv ) )
  = c · exp( −½·( (u + A⁻¹Bv)ᵀA(u + A⁻¹Bv) + vᵀ(D − BᵀA⁻¹B)v ) ) ,

using the constant c = (2π)^(−n/2)·|Σ|^(−1/2). The last line can be proven by expanding it and making a term-wise comparison with the second line. Note that D − BᵀA⁻¹B = Σ_VV⁻¹, which follows from applying equation (A.27) to Σ⁻¹. Finally, using (A.14) shows that the conditional density of U given v ∈ ℝˢ is again normal with mean −A⁻¹Bv = Σ_UV·Σ_VV⁻¹·v and covariance matrix A⁻¹ = Σ_UU − Σ_UV·Σ_VV⁻¹·Σ_VU. The proof of equation (A.13) is analogous.
Exponential Family
All of the above measures belong to the class of measures in the exponential family.
Let us start by defining formally the exponential family.
In Table A.2 we have given the functions τ₀ and τ together with the normalization constant a₀(θ) for all the one-dimensional measures introduced. In the case of X ∼ Normal(µ, Σ) where µ ∈ ℝⁿ, a straightforward manipulation of the definition given in equation (A.6) shows that

θ = ( Σ⁻¹µ ; −½(Σ⁻¹)₁₁ ; −(Σ⁻¹)₁₂ ; … ; −½(Σ⁻¹)₂₂ ; −(Σ⁻¹)₂₃ ; … ; −½(Σ⁻¹)ₙₙ ) ,   (A.16)
τ(x) = ( x ; x₁² ; x₁x₂ ; … ; x₁xₙ ; x₂² ; x₂x₃ ; … ; xₙ² ) ,
τ₀(x) = 1 ,
In this section we introduce the basic terms of functional analysis together with
some examples. This section is followed by a more detailed subsection about
matrix algebra together with some useful matrix identities. For a more detailed
treatment of matrices the interested reader is referred to (Harville 1997; Lütkepohl
1996).
Definition A.31 (Vector space) A set³ X is a vector space if addition and multiplication by a scalar are defined such that, for x, y ∈ X and c ∈ ℝ,

x + y ∈ X ,   cx ∈ X ,   1x = x ,   0x = 0 .
3 The notational similarity of as a vector space as well as the sample space (in probability theory) is intended
to indicate their similar roles in the two fields.
x + y = y + x ,
(x + y) + z = x + (y + z) ,
∃ 0 ∈ X :  x + 0 = x ,
∃ −x ∈ X :  x + (−x) = 0 ,
c(x + y) = cx + cy ,   (c + d)x = cx + dx .
Definition A.34 (ℓₚⁿ and Lₚ) Given a subset X ⊆ ℝⁿ, the space Lₚ(X) is the space of all functions f : X → ℝ such that

∫_X |f(x)|ᵖ dx < ∞  if p < ∞ ,      sup_{x∈X} |f(x)| < ∞  if p = ∞ .
Definition A.36 (Balls in normed spaces) Given a normed space X, the open ball B_τ(x) ⊆ X of radius τ around x ∈ X is defined by

B_τ(x) ≝ {y ∈ X | ‖x − y‖ < τ} .

Equivalently, the closed ball B̄_τ(x) ⊆ X is defined by

B̄_τ(x) ≝ {y ∈ X | ‖x − y‖ ≤ τ} .
Definition A.37 (Inner product space) Suppose we are given a vector space X. An inner product space (or pre-Hilbert space) is defined by the tuple (X, ⟨·, ·⟩), where ⟨·, ·⟩ : X × X → ℝ is called an inner product and satisfies the following properties: For all x, y, z ∈ X and c, d ∈ ℝ,

⟨x, x⟩ ≥ 0 ,   (A.19)
⟨x, x⟩ = 0 ⇔ x = 0 ,   (A.20)
⟨cx + dy, z⟩ = c⟨x, z⟩ + d⟨y, z⟩ ,   (A.21)
⟨x, y⟩ = ⟨y, x⟩ .   (A.22)

Clearly, each inner product space is a normed space when defining ‖x‖ ≝ √⟨x, x⟩. The function ⟨·, ·⟩ : X × X → ℝ is called a generalized inner product if it only satisfies equations (A.20)–(A.22).
where the second step follows from equation (A.21) and the last step is a direct
consequence of equation (A.19). Thus, the inner product is a positive semidefinite
function by definition.
Definition A.42 (Cauchy sequence) A sequence (xᵢ)_{i∈ℕ} in a normed space is said to be a Cauchy sequence if lim_{n→∞} sup_{m≥n} ‖xₙ − xₘ‖ = 0. Note that all convergent sequences are Cauchy sequences but the converse is not true in general.
Definition A.44 (Linear operator) Given two Hilbert spaces X and Y, a mapping T : X → Y is called a linear operator if and only if
In this section we recall the notion of covering and packing numbers as well as
entropy numbers. We present an elementary relation for covering and packing
numbers; for further information the interested reader is referred to Kolmogorov
and Fomin (1957), Carl and Stephani (1990) and Vidyasagar (1997).
The cover B is said to be proper if, and only if, B ⊆ A. The set B ⊆ A is an
ε–packing of A if, for all distinct b, c ∈ B, d (b, c) > ε.
Figure A.5  (Left) ε–covering of the set A ⊂ ℝ² using the Euclidean metric. The black dots represent the ε–cover. (Right) ε–packing of the same set A ⊂ ℝ². Note that by definition we can place balls of radius ε/2 around the ε–packing and obtain a set of balls which have an empty intersection with each other.
The nth inner entropy number ϕ_A(n) is defined as the largest ε > 0 such that there exists an ε–packing of A of size at least n,

ϕ_A(n) ≝ sup{ ε > 0 | there exists an ε–packing of A of size at least n } .
Proof The result follows directly from Theorem A.49 noticing that A (ε) and ρ
ρ
A (ε) are non-increasing functions for increasing ε, that is, we know that there
exists a smallest ≥ 0 such that
ρ
A (2" (n) + ) =
A
ρ
A (" A (n)) = n .
ϕ A (n)
This shows the rightmost inequality. The leftmost inequality can be shown in an
analogous way.
Vectors, which are column vectors by definition, and matrices are denoted in bold
face5 , e.g., x or X. Vector components are denoted by subscripts omitting bold
5 This should not be confused with the special symbols E, P, v, f, F and I which denote the expectation value,
the probability measure, the empirical probability measure, the density, the distribution function and the indicator
function, respectively. Whenever the symbol x is already in use, we use x to denote a vector to avoid confusion.
face, i.e., xi . Note that for matrices we do not omit the bold face, i.e., Xi j is the
element of X in the ith row and j th column. We have for the n × 1 vector x
that x = (x1 , . . . , xn ) = (x1 ; . . . ; xn ), i.e., a comma-separated list creates a row
vector whereas a semicolon-separated list denotes a column vector. The n × n
identity matrix is denoted by In . We omit the index n whenever the size of the
matrix is clear from the context. The vector ei denotes the ith unit vector, i.e.,
all components
are zero except the ith component which is one. The vector 1 is
defined by i ei = (1; . . . ; 1). The main importance of matrix algebra stems from
the following theorem which builds the link between linear operators and matrices.
Theorem A.52 (Linear operators in finite dimensional spaces) All linear operators φ : ℝⁿ → ℝᵐ for finite n, m ∈ ℕ admit a representation of the form

φ(x) = Ax ,

where A ∈ ℝ^(m×n) is called the parameter matrix.

Proof  Put x = Σ_{i=1}^{n} xᵢeᵢ. By the two properties of linear operators,

φ(x) = φ( Σ_{i=1}^{n} xᵢeᵢ ) = Σ_{i=1}^{n} xᵢφ(eᵢ) = (φ(e₁), …, φ(eₙ)) · x ,

that is, the columns of the matrix A = (φ(e₁), …, φ(eₙ)) are the images of the n unit vectors eᵢ ∈ ℝⁿ.
Types of Matrices
Definition A.53 (Square matrix) A matrix A ∈ Ê n×m is called a square matrix if,
and only if, m = n.
∀c ∈ ℝⁿ, c ≠ 0 :  cᵀAc > 0 .

If the inequality only holds with ≥ then A is called positive semidefinite. This is denoted by A ≥ 0.
Definition A.59 (Singular matrix) A square matrix A is singular if, and only if,
|A| = 0; otherwise the matrix is called a non-singular matrix.
Transpose
(AB)ᵀ = BᵀAᵀ .
Rank
Determinant
Proof  The result is trivially true for n = 1. Let us assume the assertion is true for n ∈ ℕ. Then, for any (n+1) × (n+1) matrix A, by definition

|Aᵀ| = Σ_{j=1}^{n+1} (Aᵀ)₁ⱼ · |(Aᵀ)^[1j]| · (−1)^(1+j) = Σ_{j=1}^{n+1} Aⱼ₁ · |A^[j1]| · (−1)^(1+j)
     = Σ_{j=1}^{n+1} A₁ⱼ · |A^[1j]| · (−1)^(1+j) = |A| ,

where the second line follows by the assumption and the definition of the transpose.
Proof  Let us assume that A is lower triangular. The result follows by induction. The case n = 1 is covered by Definition A.64. Let us assume the assertion is true for n ∈ ℕ. Then, for any (n+1) × (n+1) matrix A, by definition

|A| = Σ_{j=1}^{n+1} A₁ⱼ · |A^[1j]| · (−1)^(1+j) = A₁₁ · |A^[11]| = A₁₁ · Π_{i=1}^{n} A_{i+1,i+1} ,
Theorem A.70 (Determinant and rank) For any n × n matrix A with rk(A) < n we have that |A| = 0.

Proof  Since rk(A) < n we know that there exists a column aᵢ of A which can be linearly combined from the remaining n − 1 columns. Without loss of generality suppose that this is the first column a₁, i.e., a₁ = Σ_{j=1}^{n−1} λⱼaⱼ₊₁. According to Theorem A.69 we know that

|A| = |( a₁ − Σ_{j=1}^{n−1} λⱼaⱼ₊₁ , a₂, …, aₙ )| = |(0, a₂, …, aₙ)| = 0 .
Proof  In order to prove the assertion we use the fact that M can be written as the product of two partitioned block-triangular matrices, i.e.,

( I_s  0 ; CA⁻¹  D − CA⁻¹B ) · ( A  B ; 0  I_r )  =  ( A − BD⁻¹C  BD⁻¹ ; 0  I_r ) · ( I_s  0 ; C  D ) .

Applying Theorems A.67 and A.68 proves the result.
Trace
tr(A) ≝ Σ_{i=1}^{n} Aᵢᵢ .
tr (AB) = tr (BA) .
Inverse
Definition A.75 (Inverse of a matrix) The square matrix A−1 is called the inverse
of A ∈ Ên×n if, and only if,
A−1 A = AA−1 = In .
The inverse exists if, and only if, A is non-singular, i.e., |A| ≠ 0.
Theorem A.76 (Inverse of square matrix) If A and B are any two n × n matrices
and AB = In then B = A−1 .
Proof  From Theorem A.67 we know that |A| · |B| = |Iₙ| = 1. Hence both A⁻¹ and B⁻¹ exist because |A| ≠ 0 and |B| ≠ 0. As a consequence A⁻¹ · (AB) = A⁻¹ · Iₙ ⇔ B = A⁻¹, which proves the theorem.
Theorem A.77 (Product and transpose of inverses) If the two square matrices A
and B are invertible then
(AB)⁻¹ = B⁻¹A⁻¹ ,    (Aᵀ)⁻¹ = (A⁻¹)ᵀ .

Proof  To prove the first assertion we have to show that B⁻¹A⁻¹(AB) = I, which follows from Definition A.75. The second assertion follows from (A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = Iᵀ = I by virtue of Theorem A.61.
Proof  In order to prove the theorem we need to show that A⁻¹A = Iₙ. For the i, j–th element of A⁻¹A we have

(A⁻¹A)ᵢⱼ = (1/|A|) · Σ_{l=1}^{n} (−1)^(i+l) · |A^[il]| · A_{lj} = (1/|A|) · Σ_{l=1}^{n} (−1)^(i+l) · |A^[il]| · A_{jl} ≝ qᵢⱼ .
Proof  Put D = I + BC⁻¹A. First we show that the r.h.s. of equation (A.25) has the property that its product with (C + AB) equals I, i.e.,

( C⁻¹ − C⁻¹A(I + BC⁻¹A)⁻¹BC⁻¹ ) (C + AB)
  = ( C⁻¹ − C⁻¹AD⁻¹BC⁻¹ ) (C + AB)
  = I + C⁻¹AB − C⁻¹AD⁻¹B − C⁻¹AD⁻¹BC⁻¹AB
  = I + C⁻¹A( I − D⁻¹ − D⁻¹BC⁻¹A )B
  = I + C⁻¹A( I − D⁻¹( I + BC⁻¹A ) )B = I .

Since both the l.h.s. and r.h.s. of equation (A.25) are square matrices the result follows from Theorem A.76.
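As a quick numerical sanity check of this identity in the form (C + AB)⁻¹ = C⁻¹ − C⁻¹A(I + BC⁻¹A)⁻¹BC⁻¹ (an illustrative sketch with random matrices, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
C = np.diag(rng.uniform(1.0, 2.0, size=n))   # an easily invertible matrix
A = rng.normal(size=(n, k))
B = rng.normal(size=(k, n))

lhs = np.linalg.inv(C + A @ B)
Cinv = np.linalg.inv(C)
rhs = Cinv - Cinv @ A @ np.linalg.inv(np.eye(k) + B @ Cinv @ A) @ B @ Cinv
print(np.allclose(lhs, rhs))   # True
```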
Spectral Decomposition
If A is not only square but also symmetric, it can be shown that all roots of A are
real. Using this result we can prove the following powerful theorem.
Theorem A.84 (Trace and determinant) For any symmetric n × n matrix A with eigenvalues λ₁, …, λₙ,

|A| = Π_{i=1}^{n} λᵢ ,    tr(A) = Σ_{i=1}^{n} λᵢ .
Proof  Let us assume the matrix A is positive definite, that is, for all c ∈ ℝⁿ, c ≠ 0, we know that cᵀAc > 0. By Theorem A.83 we have A = UΛUᵀ for U = (u₁, …, uₙ), UᵀU = UUᵀ = I and Λ = diag(λ₁, …, λₙ). Using c = uᵢ we obtain uᵢᵀUΛUᵀuᵢ = λᵢ‖uᵢ‖² = λᵢ > 0 for all i ∈ {1, …, n}.
Now let us assume that all n eigenvalues λᵢ are strictly positive. Then, for all vectors c ∈ ℝⁿ, c ≠ 0, with α ≝ Uᵀc,

cᵀAc = cᵀUΛUᵀc = αᵀΛα = Σ_{i=1}^{n} λᵢαᵢ² > 0 .

Note that Uᵀc = 0 ⇔ c = 0 because ‖Uᵀc‖² = cᵀUUᵀc = ‖c‖². The proof for the case of positive semidefiniteness follows from the same argument.
Quadratic forms
Proof  The lengthy proof of the existence of c ∈ ℝʳ has been omitted and can be found in Kockelkorn (2000). Let us start by proving equation (A.32). Expanding (A.31) we obtain

D(µ) = aᵀAa − 2aᵀAXµ + µᵀXᵀAXµ + µᵀYᵀBYµ − 2bᵀBYµ + bᵀBb
     = aᵀAa − 2µᵀ( XᵀAa + YᵀBb ) + µᵀ( XᵀAX + YᵀBY )µ + bᵀBb
     = aᵀAa − 2µᵀCc + µᵀCµ + bᵀBb
     = (µ − c)ᵀC(µ − c) − cᵀCc + aᵀAa + bᵀBb ,

where we used the symmetry of A and B several times. This shows equation (A.32). In the special case of Y = I we obtain

C = XᵀAX + B .   (A.34)

Let us introduce the abbreviation u ≝ a − Xb ⇔ a = u + Xb. Then we know

Cc = XᵀAa + Bb = XᵀA(u + Xb) + Bb = XᵀAu + Cb ,

where we have used equation (A.34). Since C > 0 it follows that

cᵀCc = cᵀCC⁻¹Cc = (XᵀAu + Cb)ᵀC⁻¹(XᵀAu + Cb)
     = uᵀAXC⁻¹XᵀAu + 2uᵀAXb + bᵀCb .   (A.35)

Further, we have

aᵀAa = (u + Xb)ᵀA(u + Xb) = uᵀAu + 2uᵀAXb + bᵀXᵀAXb .   (A.36)

Combining equations (A.36), (A.35) and (A.34) thus yields

aᵀAa + bᵀBb − cᵀCc = uᵀAu − uᵀAXC⁻¹XᵀAu + bᵀXᵀAXb + bᵀBb − bᵀCb
                   = uᵀAu − uᵀAXC⁻¹XᵀAu + bᵀ( XᵀAX + B )b − bᵀCb
                   = uᵀ( A − AXC⁻¹XᵀA )u .

Finally, using the Woodbury formula given in Theorem A.79 we can write A − AXC⁻¹XᵀA = ((A − AXC⁻¹XᵀA)⁻¹)⁻¹ as

( A − (AX)C⁻¹(XᵀA) )⁻¹ = A⁻¹ + X( I − C⁻¹XᵀAX )⁻¹C⁻¹Xᵀ
                       = A⁻¹ + X( C − XᵀAX )⁻¹Xᵀ = A⁻¹ + XB⁻¹Xᵀ .

Putting all these results together proves equation (A.33).
Proof  According to Theorem A.83 we know that each symmetric A can be written as A = UΛUᵀ where UᵀU = UUᵀ = I and Λ = diag(λ₁, …, λₙ). For a given x ∈ ℝⁿ let us consider y = Uᵀx. Then we know that

xᵀAx / xᵀx = xᵀUΛUᵀx / xᵀUUᵀx = yᵀΛy / yᵀy = ( Σ_{i=1}^{n} λᵢyᵢ² ) / ( Σ_{i=1}^{n} yᵢ² ) .   (A.37)

Since λₙ ≤ λᵢ and λᵢ ≤ λ₁ for all i ∈ {1, …, n} we know that

λₙ·Σ_{i=1}^{n} yᵢ² ≤ Σ_{i=1}^{n} λᵢyᵢ² ≤ λ₁·Σ_{i=1}^{n} yᵢ² .   (A.38)
Kronecker Product
Proof  Let us represent the matrices A and B in terms of their rows and C and D in terms of their columns,

A = (a₁ᵀ; …; aₙᵀ), aᵢ ∈ ℝᵐ ,   B = (b₁ᵀ; …; b_rᵀ), bⱼ ∈ ℝ^q ,
C = (c₁, …, c_s), c_u ∈ ℝᵐ ,   D = (d₁, …, d_t), d_v ∈ ℝ^q ,

and i ∈ {1, …, n}, j ∈ {1, …, r}, u ∈ {1, …, s} and v ∈ {1, …, t}. Let i∗j ≝ (i − 1)·r + j and u∗v ≝ (u − 1)·t + v. Consider the element in the i∗j–th row and u∗v–th column of (A ⊗ B)(C ⊗ D):

((A ⊗ B)(C ⊗ D))_{i∗j, u∗v} = ( a_{i1}bⱼᵀ, …, a_{im}bⱼᵀ ) ( c_{1u}d_v; …; c_{mu}d_v )
  = Σ_{l=1}^{m} a_{il}c_{lu} · bⱼᵀd_v
  = aᵢᵀc_u · bⱼᵀd_v
  = (AC)_{iu} · (BD)_{jv}
  = ((AC) ⊗ (BD))_{i∗j, u∗v} ,
which shows that A ⊗ B has at least mn eigenvectors with eigenvalues given by the
product of all pairs of eigenvalues of A and B. Since all eigenvectors are orthogonal
to each other A ⊗ B ∈ Ê mn×mn has at most mn eigenvectors.
Proof If A and B are positive definite then all eigenvalues of A and B are strictly
positive (see Theorem A.85). Hence, by Theorem A.91 all eigenvalues of A ⊗ B
are strictly positive and thus A ⊗ B is positive definite by Theorem A.85. The case
of positive semidefinite matrices proceeds similarly.
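A small numerical illustration of these two facts—that the eigenvalues of A ⊗ B are all pairwise products of eigenvalues of A and B, and that the Kronecker product of positive definite matrices is positive definite (an illustrative NumPy sketch, not part of the book):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)); A = M @ M.T + np.eye(3)   # symmetric positive definite
N = rng.normal(size=(2, 2)); B = N @ N.T + np.eye(2)   # symmetric positive definite

eigA = np.linalg.eigvalsh(A)
eigB = np.linalg.eigvalsh(B)
eigAB = np.linalg.eigvalsh(np.kron(A, B))

products = np.sort(np.outer(eigA, eigB).ravel())
print(np.allclose(np.sort(eigAB), products))   # True: eigenvalues are products
print((eigAB > 0).all())                        # True: A (x) B is positive definite
```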
Derivatives of Matrices
∂φ(x)/∂x ≝ ( ∂φⱼ(x)/∂xᵢ )ᵢ,ⱼ = ( ∂φ₁(x)/∂x₁ ··· ∂φₙ(x)/∂x₁ ; ⋮ ; ∂φ₁(x)/∂xₘ ··· ∂φₙ(x)/∂xₘ ) .

∂φ(x)/∂x = 2Ax .
Proof  For any i ∈ {1, …, n} let us consider the ith element of ∂φ(x)/∂x. We have

∂φ(x)/∂xᵢ = ∂( Σ_{r=1}^{n} Σ_{s=1}^{n} x_r A_{rs} x_s )/∂xᵢ = Σ_{s≠i} A_{is}x_s + Σ_{r≠i} A_{ri}x_r + 2xᵢAᵢᵢ = 2·(Ax)ᵢ ,
∂(A(x))⁻¹/∂x = −(A(x))⁻¹ · ( ∂A(x)/∂x ) · (A(x))⁻¹ .
Proof First note that, for all x ∈ Ê, by definition A (x) (A (x))−1 = I. Since I
does not depend on x we have
Proof  Let us consider the i, j–th element of the n × n matrix of derivatives, i.e., ∂ln(|A|)/∂Aᵢⱼ. By the chain rule of differentiation and Definition A.64 we know

∂ln(|A|)/∂Aᵢⱼ = ( d ln(|A|)/d|A| ) · ( ∂|A|/∂Aᵢⱼ ) = (1/|A|) · ∂( Σ_{l=1}^{n} A_{lj}·|A^[lj]|·(−1)^(l+j) )/∂Aᵢⱼ
             = (1/|A|) · |A^[ij]| · (−1)^(i+j) ,

because all the |A^[lj]| involve determinants of matrices which do not contain Aᵢⱼ. Exploiting the symmetry of A and Theorem A.78 proves the theorem.
The following theorem is given without proof; the interested reader is referred to Magnus and Neudecker (1999).
The following theorem is given without proof; the interested reader is referred to
Magnus and Neudecker (1999).
The concept of well and ill-posed problems was introduced in Hadamard (1902)
in an attempt to clarify what types of boundary conditions are most natural for
various types of differential equations. The solution to any quantitative problem
usually ends in finding the “solution” y from given “initial data” x,
y = S (x) . (A.40)
We shall consider x and y as elements of metric spaces X and Y with the metrics ρ_X and ρ_Y. The metric is usually determined by the formulation of the problem.
Suppose that the concept of solution is defined by equation (A.40).
This section collects some general results which are frequently used in the main
body. Each theorem is followed by its proof.
Theorem A.101 (Lower bound for the exponential) For all x ∈ ℝ we have

1 + x ≤ exp(x) ,

with equality if and only if x = 0.

Proof  Consider the function f(x) = 1 + x − exp(x). The first and second derivatives of this function are df(x)/dx = 1 − exp(x) and d²f(x)/dx² = −exp(x). Hence this function has a maximum at x* = 0, which implies that f(x) ≤ f(0) = 0 ⇔ 1 + x ≤ exp(x).
Proof  We prove the theorem by induction over d. The theorem is trivially true for d = 0 and all x ∈ ℝ. Suppose the theorem is true for some d ∈ ℕ. Then

(1 + x)^(d+1) = (1 + x)·Σ_{i=0}^{d} (d choose i)·xⁱ = Σ_{i=0}^{d} (d choose i)·xⁱ + Σ_{i=1}^{d+1} (d choose i−1)·xⁱ
  = (d choose 0)·x⁰ + Σ_{i=1}^{d} ( (d choose i) + (d choose i−1) )·xⁱ + (d choose d)·x^(d+1)
  = (d+1 choose 0)·x⁰ + Σ_{i=1}^{d} (d+1 choose i)·xⁱ + (d+1 choose d+1)·x^(d+1)
  = Σ_{i=0}^{d+1} (d+1 choose i)·xⁱ ,

where we have used

(d choose i) + (d choose i−1) = (d+1 choose i)   (A.41)

in the third line.
Proof Using Theorem A.103 with the factorization (a + b)d = (b (a/b + 1))d
proves the corollary.
Theorem A.105 (Upper bound for the sum of binomials) For any m ∈ ℕ and d ∈ {1, …, m} we have

Σ_{i=0}^{d} (m choose i) < (em/d)^d .
Proof  The result follows from Theorems A.103 and A.102. Noticing that (m/d)^(d−i) ≥ 1 for all i ∈ {0, …, d} we see that

Σ_{i=0}^{d} (m choose i) ≤ Σ_{i=0}^{d} (m choose i)·(m/d)^(d−i) = (m/d)^d · Σ_{i=0}^{d} (m choose i)·(d/m)ⁱ
  ≤ (m/d)^d · Σ_{i=0}^{m} (m choose i)·(d/m)ⁱ = (m/d)^d · (1 + d/m)^m
  < (m/d)^d · exp(d) = (em/d)^d .

The theorem is proven.
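A quick numerical check of both inequalities (illustrative values only):

```python
from math import comb, e

m, d = 50, 7
s = sum(comb(m, i) for i in range(d + 1))
print((m / d) ** d < s < (e * m / d) ** d)   # True
```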
Theorem A.106 (Cauchy-Schwarz inequality (Cauchy 1821)) For any two elements x and y of an inner product space we have

|⟨x, y⟩| ≤ ‖x‖ · ‖y‖ .

Proof  If y = 0 then the inequality is true because both sides are zero. Assume then y ≠ 0. For any c ∈ ℝ we have

0 ≤ ⟨x + cy, x + cy⟩ = ‖x‖² + 2c⟨x, y⟩ + c²‖y‖² .

Now put c = −⟨x, y⟩/‖y‖² to obtain

0 ≤ ‖x‖² − ⟨x, y⟩²/‖y‖²  ⇔  |⟨x, y⟩| ≤ ‖x‖ · ‖y‖ .

Note that the Cauchy-Schwarz inequality remains valid even if equation (A.20) is replaced by y = 0 ⇒ ‖y‖ = 0.
Theorem A.108 (Law of large numbers) For any random variable X with finite expectation µ = EX[X] and variance Var(X) we have

∀ε > 0 :  lim_{n→∞} P_{Xⁿ}( | (1/n)·Σ_{i=1}^{n} Xᵢ − µ | > ε ) = 0 .   (A.42)
We shall prove this theorem shortly. Now, the problem of large deviations is to
determine how fast the convergence (A.42) happens to be. We would like to know
how likely it is that a mean of n independently identically distributed (iid) numbers
deviates from their common expectation by more than ε > 0. Let us start with a
simple theorem bounding the tail of any positive random variable.
where the third line follows from Theorem A.109 and the last line follows from the independence of the Xᵢ and Theorem A.15. Now the problem of finding tight bounds for large deviations reduces to the problem of bounding the function EX[exp(s·n⁻¹·(X − EX[X]))], which is also called the moment generating function (see Feller (1966)). For random variables with finite support the most elegant bound is due to Hoeffding (1963)⁶.
Lemma A.111  Let X be a random variable with EX[X] = 0 and PX(X ∈ [a, b]) = 1. Then, for all s > 0,

EX[exp(sX)] ≤ exp( s²(b − a)²/8 ) .
Proof  Since exp is convex,

∀x ∈ [a, b] :  exp(sx) ≤ ((x − a)/(b − a))·exp(sb) + ((b − x)/(b − a))·exp(sa) .

Exploiting EX[X] = 0, and introducing the notation p ≝ −a/(b − a), we have that exp(ps(b − a)) = exp(−sa). Thus, we get

EX[exp(sX)] ≤ p·exp(sb) + (1 − p)·exp(sa)
  = ( p·exp(sb)·exp(ps(b − a)) + (1 − p)·exp(sa)·exp(ps(b − a)) ) / exp(ps(b − a))
  = ( 1 − p + p·exp(s(b − a)) ) / exp(ps(b − a))
  = exp(g(u)) ,

where

u ≝ s(b − a) ≥ 0 ,   g(u) ≝ −pu + ln(1 − p + p·exp(u)) .

By a straightforward calculation we see that the derivative of g is

dg(u)/du = −p + p·exp(u)/(1 − p + p·exp(u)) ,

therefore g(0) = dg(u)/du|_{u=0} = 0. Moreover,
EXᵢ[Xᵢ] = µ. Then, for all ε > 0,

P_{Xⁿ}( (1/n)·Σ_{i=1}^{n} Xᵢ − µ > ε ) < exp( −2nε²/(b − a)² )   (A.45)

and

P_{Xⁿ}( | (1/n)·Σ_{i=1}^{n} Xᵢ − µ | > ε ) < 2·exp( −2nε²/(b − a)² ) .   (A.46)
Proof  Noting that EX[X − µ] = 0 we can apply Lemma A.111 together with equation (A.44) to obtain

P_{Xⁿ}( (1/n)·Σ_{i=1}^{n} Xᵢ − µ > ε ) < ( Π_{i=1}^{n} exp( s²(b − a)²/(8n²) ) ) / exp(sε) = exp( (s²/(8n))·(b − a)² − sε ) .
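As a quick empirical illustration of (A.46) (a hypothetical simulation, not part of the text), the observed deviation frequency should stay below the Hoeffding bound:

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps, trials = 200, 0.1, 20000
a, b, mu = 0.0, 1.0, 0.5              # X_i uniform on [0, 1], so mu = 0.5

means = rng.uniform(a, b, size=(trials, n)).mean(axis=1)
freq = np.mean(np.abs(means - mu) > eps)
bound = 2 * np.exp(-2 * n * eps**2 / (b - a)**2)
print(freq, "<=", bound)              # empirical frequency vs. Hoeffding bound
```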
Lemma A.113  Let X be a random variable with EX[X] = 0, PX(|X| ≤ c) = 1 and σ² = Var(X) = EX[X²]. Then, for all s > 0,

EX[exp(sX)] ≤ exp( (σ²/c²)·( exp(sc) − 1 − sc ) ) .
which needs to be minimized w.r.t. s. Setting the first derivative of the logarithm of
equation (A.47) to zero gives
nσ 2
(c exp (sc) − c) − ε = 0 ,
c2
εc
which implies that the minimum is at s = 1c ln 1 + nσ 2 . Resubstituting this value
in equation (A.47) results in the following bound
2 !!
nσ εc " ! εc "" ε ! εc "
exp 1+ − 1 − ln 1 + − ln 1 +
c2 nσ 2 nσ 2 c nσ 2
ε nσ 2 ! εc " ε ! εc "
= exp − 2 ln 1 + − ln 1 +
c c nσ 2 c nσ 2
!
ε nσ 2 εc "
= exp − 1+ ln 1 + −1
c εc nσ 2
The theorem is proved.
The full power of this theorem becomes apparent if we bound 1 + 1x ln (1 + x)
even further.
2λc
λ nσ 2 + 2
= exp − λc
−1
c 2 + nσ 2
2λ2 c + 2λnσ 2 λ
= exp − +
2cnσ 2 + λc2 c
λ 2
= exp − .
2nσ 2 + λc
Substituting λ = nε proves the theorem.
If all we need is a bound on the probability that X1 + · · · + Xn = 0 we can eliminate
the exponent 2 on ε as opposed to Hoeffding’s and Bernstein’s inequality.
Finally, there exists a further generalization of large deviation bounds when consid-
ering any function f : n → Ê of n random variables X1 , . . . , Xn . Again, we aim
to bound the probability that f (X1 , . . . , Xn ) deviates from EXn f (X1 , . . . , Xn ) .
Before we start we need some additional quantities.
Lemma A.118 For any function f : n → Ê such that for all i ∈ {1, . . . n}
sup | f (x1 , . . . , xn ) − f (x1 , . . . , xi−1 , x̃ , xi+1 , . . . xn )| ≤ ci (A.50)
x∈ n , x̃∈
we know that, for all measures PX , all x ∈ i−1 , all x ∈ and i ∈ {1, . . . , n},
EXn−i |Xi =(x,x) f (x, x, X) − EXn−i+1 |Xi−1 =x f (x, X)
≤ |ci | .
where the third line follows by the triangle inequality and from equation (A.50).
This appendix gives all proofs and derivations of Part I in detail. If necessary the
theorems are restated before proving them. This appendix is not as self-contained
as the chapters in the main body of this book; it is probably best to read it in
conjunction with the corresponding chapter.
In this section we present the proofs of Theorem 2.20 and Corollary 2.21.
Proof of Theorem 2.20. For all r ∈ and all sequences (x1 , . . . , xr ) ∈ r let
K1 , K2 , K+, Kc , K +c , K∗ and K f be the r × r matrices
whose i,j –th element
is
by k1 xi , x j , k2 xi , x j , k1 xi ,x j +k2 xi , x j , c·k1 xi , x j , k1 xi , x j +c,
given
k1 xi , x j ·k2 xi , x j and f (xi )· f x j , respectively. We need to show that K+ , Kc ,
K+c , K∗ and K f are positive semidefinite using only that K1 and K2 are positive
semidefinite, i.e., for all α ∈ Êr , α K1 α ≥ 0 and α K2 α ≥ 0.
1. α K+ α = α (K1 + K2 ) α = α K1 α + α K2 α ≥ 0.
2. α Kc α = c · α K1 α ≥ 0.
2
3. α K+c α = α K1 + c11 α = α K1 α + c 1 α ≥ 0.
4. According to Corollary A.92 the r 2 × r 2 matrix H = K1 ⊗ K2 is positive
is, for all a ∈ Êr , a Ha ≥ 0. Given any α ∈ Êr , let us consider
2
definite,
that
a = α1 e1 ; . . . ; αr er ∈ Êr . Then,
2
r2
r2
r
r
a Ha = ai a j Hi j = αi α j Hi+(i−1)r, j +( j −1)r
i=1 j =1 i=1 j =1
254 Appendix B
r
r
= αi α j k 1 x i , x j k 2 x i , x j = α K ∗ α ≥ 0 .
i=1 j =1
Proof of Corollary 2.21. The first assertion follows directly from propositions 3
and 4 of Theorem 2.20. For the proof of the second assertion note that
∞ ∞
k1 (x, x̃ ) 1 i 1 i
exp = k (x, x̃ ) = 1 + k (x, x̃ ) .
σ 2
i=0
σ i!
2i 1
i=1
σ i! 1
2i
Now using propositions 4 and 5 of Theorem 2.20 and the second assertion of
this corollary proves the third assertion. The last assertion follows directly from
proposition 4 and 5 of Theorem 2.20 as
- -
k1 (x, x̃ ) 1 1
k (x, x̃ ) = ( = · · k1 (x, x̃ ) .
k1 (x, x) · k1 (x̃, x̃) k1 (x, x) k1 (x̃, x̃)
f (x) f (x̃)
In this section we prove that the recursions given in equations (2.26)–(2.27) and
(2.29)–(2.30) compute the kernel functions (2.25) and (2.28), respectively.
255 Proofs and Derivations—Part I
In order to compute the kernel (2.25) efficiently we note that, in the outer sum over
b ∈ s , it suffices to consider all possible substrings of length s that are contained
in u. Hence we can rewrite the kernel kr by
|v|−s+1
r |u|−s+1
kr (u, v) = λ2s Iu[i:(i+s−1)]=v[ j :( j +s−1)]
s=1 i=1 j =1
|u|
|v| |v|−1
|u|−1
= λ Iu i =v j +
2
λ4 Iu[i:(i+1)]=v[ j :( j +1)] + · · ·
i=1 j =1 i=1 j =1
|u|
|v|
= λ2 Iu i =v j + λ2 Iu[i:(i+1)]=v[ j :( j +1)] + λ2 (· · ·) .
i=1 j =1
The innermost nested sum can be evaluated recursively when we take advantage
of the fact that u [i : (i + s)] = v [ j : ( j + s)] implies that u [i : (i + s + t)] =
v [ j : ( j + s + t)] for all t ∈ . This proves equations (2.26)–(2.27).
The proof that the recursions given in equation (2.29)–(2.30) compute the kernel
given in equation (2.28) proceeds in two stages:
1. First, we establish that (2.30) computes
.
1 if r = 0
kr (u, v) =
|u|+|v|−i1 − j1 +2 . (B.1)
b∈ r {i|b=u[i] } {j|b=v[j] } λ otherwise
2. Second, we directly show that (2.29) holds.
In order to prove equation (B.1) we analyze three cases:
1. If either |u| < r or |v| < r we know that one of the sums in equation
(B.1) is zero because u or v cannot contain a subsequence longer than the strings
themselves. This justifies the first part of (2.30).
2. If r = 0 then the second part of (2.30) is equivalent to the first part of equation
(B.1).
3. For a given character u ∈ and a given
string v ∈ ∗ consider Mu =
{b ∈ r | br = u } and Ju = j ∈ {1, . . . , |v|}r
v jr = u , i.e., all subsequences
256 Appendix B
of length r such that the last character in b equals u and all index vectors over v
such that the last indexed character equals u. Then we know that
λ|uu s |+|v|−i1 − j1 +2
b∈ r {i|b=(uu s )[i] } {j|b=v[j] }
= λ|uu s |+|v|−i1 − j1 +2 + λ|uu s |+|v|−i1 − j1 +2
b∈Mu s {i|b=(uu s )[i] } j∈Ju s {j|b=v[j] }\ Ju s
|uu s |+|v|−i1 − j1 +2
+ λ .
b∈ r \Mu s {i|b=(uu s )[i] } {j|b=v[j] }
Since for all remaining subsequences b ∈ r the last character does not match
with u s we can summarize the remaining terms by
λ· λ|u|+|v|−i1 − j1 +2 .
b∈ r {i|b=u[i] } {j|b=v[j] }
kr (u,v)
It remains to prove that (2.29) is true. Again, we analyze the two different cases:
1. If either |u| < r or |v| < r we know that one of the sums in equation
(2.28) is zero because u or v cannot contain a subsequence longer than the strings
themselves. This justifies the first part of (2.29).
2. Let Mu and Ju be defined as in the previous analysis. Then we know
kr (uu s , v) = λl(i)+l(j)
b∈Mu s {i|b=(uu s )[i] } j∈Ju s
257 Proofs and Derivations—Part I
+ λl(i)+l(j)
b∈Mu s {i|b=(uu s )[i] } {j|b=v[j] }\ Ju s
+ λl(i)+l(j) .
b∈ r \Mu s {i|b=(uu s )[i] } {j|b=v[j] }
Since the remaining sums run over all b ∈ r where br is not equal to u s (or to
any symbol in v if it matches with u s ) they can be computed by kr (u, v). This
completes the proof that the recursion given in equations (2.29)–(2.30) computes
(2.28).
In this section we present the proof of Theorem 2.29 also found in Schölkopf et al.
(2001).
Proof Let us introduce the mapping : → defined by
(x) = k (x, ·) .
Since k is a reproducing kernel, by equation (2.36) we know that
∀x, x̃ ∈ : ( (x)) (x̃) = k (x, x̃ ) =
(x) , (x̃) . (B.2)
Now, given x = (x1 , . . . , xm ), any f ∈ can be decomposed into a part that exists
in the span of the (xi ) and a part which is orthogonal to it,
m
f = αi (xi ) + v
i=1
258 Appendix B
with equality occurring if, and only if, v = 0. Hence, setting v = 0 does not affect
the first term in equation (2.38) while strictly reducing the second term—hence
any minimizer
must have v = 0. As a consequence, any minimizer takes the form
f = m i=1 α i (xi ), so, using equation (B.2)
m
f (·) = αi k (xi , ·) .
i=1
Proof Suppose wt is the final solution vector after t mistakes. Then, by the
algorithm in Section D.1 on page 321 the last update step reads
wt = wt −1 + yi xi .
where the last step follows from repeated applications up to step t = 0 where by
assumption w0 = 0. Similarly, by definition of the algorithm,
≤ wt −1 + ς ≤ · · · ≤ tς 2 .
2 2
Here, we give a derivation of the dual optimization problems of SVMs. For the
def
sake of understandability we denote by Y = diag (y1 , . . . , ym ) the m × m diagonal
def m,m
matrix of classes (−1 and +1) and by G = xi , x j i, j =1 the m × m Gram matrix.
260 Appendix B
1
m
L (w, α) = w2 − αi yi
xi , w + α 1 .
2 i=1
Substitution into the primal Lagrangian yields the Wolfe dual, that is,
α̂ = argmax W (α)
0≤α
1 1
W (α) = α YGYα − α YGYα + α 1 = α 1 − α YGYα .
2 2
Now consider the case involving the linear soft margin loss (see equation (2.48)).
First, let us multiply the objective function by the constant C = 2λm
1
which would
not change the solution but render the derivation much easier. Expressed in terms
of the primal Lagrangian the solution ŵ can be written as
! "
ŵ, ξ̂ , α̂, β̂ = argmin argmax L (w, ξ , α, β) ,
w∈ ,0≤ξ 0≤α,0≤β
1 m
L (w, ξ, α, β) = w2 + Cξ 1 − αi yi
xi , w + α 1 − α ξ − β ξ
2 i=1
1 m
= w −
2
αi yi
xi , w + α 1 + ξ (C1 − α − β) .
2 i=1
1 Note that the constant positive factor of 12 does not change the minimum.
261 Proofs and Derivations—Part I
The corresponding dual is found by differentiation w.r.t. the primal variables w and
ξ , that is,
∂ L (w, ξ , α, β)
m m
= ŵ − α y x
i i i = 0 ⇔ ŵ = αi yi xi ,
∂w w=ŵ i=1 i=1
∂ L (w, ξ , α, β)
= C1 − α − β = 0 ⇔ α = C1 − β . (B.3)
∂ξ ξ =ξ̂
Substituting these stationarity conditions into the primal Lagrangian we obtain the
following dual objective function
! "
α̂, β̂ = argmax W (α, β) ,
0≤α,0≤β
1 1
W (α, β) = α YGYα − α YGYα + α 1 = α 1 − α YGYα .
2 2
Since β ≥ 0, the second stationarity condition (B.3) restricts each αi to be less than
or equal to C. As a consequence the final Wolfe dual is given by
α̂ = argmax W (α) ,
0≤α≤C1
1
W (α) = α 1 − α YGYα .
2
Consider the quadratic soft margin loss given by equation (2.49). Again, let us
1
multiply the objective function by the constant 2λm . Expressed in terms of the
primal Lagrangian the solution ŵ can be written as
! "
ŵ, ξ̂ , α̂, β̂ = argmin argmax L (w, ξ , α, β) ,
w∈ ,0≤ξ 0≤α,0≤β
1 1 m
L (w, ξ , α, β) = w2 + ξξ− αi yi
xi , w + α 1 − α ξ − β ξ
2 2λm i=1
m
1 1
= w −
2
αi yi
xi , w + α 1 + ξ ξ −α−β .
2 i=1
2λm
262 Appendix B
The corresponding dual is found by differentiation w.r.t. the primal variables w and
ξ , that is,
∂ L (w, ξ , α, β)
m m
= ŵ − α i yi xi = 0 ⇔ ŵ = αi yi xi ,
∂w w=ŵ i=1 i=1
∂ L (w, ξ , α, β)
1
= ξ̂ − α − β = 0 ⇔ ξ̂ = λm (α + β) .
∂ξ ξ =ξ̂ λm
Substituting the stationarity conditions into the primal we obtain
! "
α̂, β̂ = argmax W (α, β) ,
0≤α,0≤β
1 1
W (α, β) = α YGYα − α YGYα + α 1 + λm (α + β) (α + β) − α − β
2 2
1 1
= α YGYα − α YGYα + α 1 + λm (α + β) − (α + β)
2 2
1 λm
= α 1 − α YGYα − α + β2 .
2 2
Noticing that decreasing β will always lead to an increase in W (α, β), we simply
set β̂ = 0. Hence, the final Wolfe dual is given by
α̂ = argmax W (α) ,
0≤α
1 λm
W (α) = α 1 − α YGYα − α α.
2 2
B.5.4 ν–Linear Margin Loss SVM
Now consider the case involving the linear soft margin loss and the reparame-
terization by ν ∈ [0, 1] (see equation (2.52)). Expressed in terms of the primal
Lagrangian the solution w∗ can be written as
! "
ŵ, ξ̂ , ρ̂, α̂, β̂, δ̂ = argmin argmax L (w, ξ , ρ, α, β, δ) ,
w∈ ,0≤ξ ,0≤ρ 0≤α,0≤β,0≤δ
1 1
L (w, ξ, ρ, α, β, δ) = w + ρ α 1 − ν − δ + ξ
2
1−α−β
2 m
m
− αi yi
xi , w .
i=1
263 Proofs and Derivations—Part I
= ŵ − α i yi xi = 0 ⇔ ŵ = αi yi xi ,
∂w w=ŵ i=1 i=1
∂ L (w, ξ , ρ, α, β, δ)
1 1
= 1 − α − β = 0 ⇔ α = 1 − β , (B.4)
∂ξ ξ =ξ̂ m m
∂ L (w, ξ , ρ, α, β, δ)
= α1 − ν − δ = 0 ⇔ α1 = ν + δ . (B.5)
∂ρ ρ=ρ̂
1
W (α) = − α YGYα .
2
Here we give the proof of Theorem 2.37. The proof is adapted from the original
proof given in Jaakkola and Haussler (1999b); we do not have to enforce convexity
of the potential function J and have dropped the assumption that αi ∈ [0, 1].
Proof of Theorem 2.37. The basic idea of this proof is to find an expression of
the leave-one-out error of an algorithm using only the coefficients α̂ obtained by
learning on the whole training sample. Thus we try to relate a leave-one-our error at
264 Appendix B
the tth example with the coefficients obtained by joint maximization of the function
W given by
1 m m
m
W (α) = − αi α j yi y j k xi , x j + J (αi ) .
2 i=1 j =1 i=1
Some notational comments are in order: Subscripts on W refer to the left out
example, the subscript on α to the value of the particular component, and the
superscripts on α to the maximizer for the corresponding functions W .
Leaving out the tth example we know that the remaining α’s are obtained by
the maximization of
1 m m
m
Wt (α) = − αi α j yi y j k xi , x j + J (αi ) .
2 i=1 j =1 i=1
i =t j =t i =t
where we used the symmetry of the Mercer kernel k. Note that the last two terms
in (B.6) do not change the maximum because they only depend on the fixed value
α̂t . As a consequence we shall omit them in the following argument. Since α̂
265 Proofs and Derivations—Part I
m
m
−α̂t yt αit yi k (xi , xt ) ≤ −α̂t yt α̂i yi k (xi , xt ) − Wt α t − Wt α̂ .
i=1 i=1
i =t i =t
In order to find the maximum of the density fTm+1 |X=x,Zm =z we use Bayes’ theorem
fTm+1 |X=x,Zm =z ((t, t)) = fTm+1 |X=x,Xm =x,Ym = y ((t, t))
PYm |Tm+1 =(t,t ),Xm =x,X=x ( y) fTm+1 |Xm =x,X=x ((t, t))
=
PYm |Xm =x,X=x ( y)
PYm |Tm =t,Xm =x ( y) fTm+1 |Xm =x,X=x ((t, t))
= ,
PYm |Xm =x ( y)
where we use the fact that the test object x ∈ and its associated latent variable T
have no influence on the generation of the classes y at the training objects x. Now,
taking the logarithm will not change the maximum
but will render optimization
ˆ
much easier. Hence, we look for the vector t̂, t which maximizes
J (t, t) = ln PYm |Tm =t,Xm =x ( y) + ln fTm+1 |Xm =x,X=x ((t, t)) − ln PYm |Xm =x ( y) .
Q 1 (t) Q 2 (t,t )
Note that the last term is a normalization constant which does not depend on
(t, t) and can thus be omitted from the optimization. Let us start by considering
the second term Q 2 (t, t) which effectively builds the link between Gaussian
processes for regression and for classification. By assumption2 PTm+1 |Xm =x,X=x =
Normal (0, Gm+1 ) and thus, according to Definition A.26, this term is given by
1 −1 t
Q 2 (t, t) = − (m + 1) ln (2π ) + ln (|Gm+1 |) + t , t Gm+1 , (B.7)
2 t
where
XX Xx Gm Xx M m
Gm+1 = = and G−1 = (B.8)
x X x x x X x x m+1 m κ
are the (m + 1) × (m + 1) Gram matrix and its inverse of the training and test
object(s). Using Theorem A.80 for the inverse of a partitioned matrix Gm+1 we
know that G−1
m+1 can be written as in equation (B.8) where
1 −1
M = G−1
m + mm , m = −κG−1
m Xx , κ = x x − x X G−1
m Xx . (B.9)
κ
2 For the sake of understandability we consider a regression model without any variance σt2 . Note, however,
that we can always incorporate the variance afterwards by changing the kernel according to equation (3.15) (see
Remark 3.10). This is particularly important if Gm+1 is not of full rank.
267 Proofs and Derivations—Part I
Combining equations (B.12) and (B.13) we obtain the following revised objective
function J (t) to be maximized over t ∈ Ê m
1 m
1
J (t) = ( y + 1) t − ln 1 + exp β −1 · ti − t G−1
m t +c. (B.14)
2β i=1
2
268 Appendix B
where π( t̂) = (π(tˆ1 ), . . . , π(tˆm )) . As can be seen from this expression, due to the
term π( t̂), it is not possible to compute the roots t̂ of this equation in a closed form.
We use the Newton-Raphson method,
−1 ∂ J (t)
t i+1 = t i − η · H t i · ,
∂ t
t=t i
B.7.2 Computation of
matrix −1 is given by
−1 M+P m
= ,
m κ
where the fourth line follows from the Woodbury formula (see Theorem A.79). In
summary,
(I + Gm P)−1 Gm (I + Gm P)−1 Xx
= −1 .
x X (I + Gm P) x x − x X (I + PGm )−1 PXx
where the third line follows by s = −t and the assumption that tˆ = 0. Since
tˆ = t̂ G−1
m Xx we know that the Gaussian process classification function is given by
m
h GPC (x) = sign αi
xi , x , α = G−1
m t̂ .
i=1
1 m
1
J (α) = ( y + 1) Gm α − ln 1 + exp β −1 · gi α − α Gm α .
2β i=1
2
1
P= · diag a g1 α , . . . , a gm α .
β
271 Proofs and Derivations—Part I
In this section we derive an explicit update rule for computing the parameter vector
θ̂ and σ̂t2 which locally maximizes the evidence fTm |Xm =x (t). In order to ease the
optimization we consider the log-evidence given by
1
2
σ Im + XX
+ t σ 2 Im + XX −1 t .
E θ, σt2 = − m ln (2π ) + ln
2 t
t
Q 1 (θ,σt )
2 Q 2 (θ,σt )
2
Due to its length we have divided the derivation into several parts. Afterwards,
we derive the relevance vector machine algorithm for classification using ideas
already outlined in Section B.7. We shall compute the weight vector µ ∈ Ên
which maximizes fW|Zm =z together with the covariance matrix ∈ Ê n×n defined
in equation (3.27).
Let us start with Q 1 θ , σt2 . According to Theorem A.72 we know
−1
2
·
σ Im + XX
=
σ 2 Im
·
−1 + σ −2 X X
,
t t t
which implies
Q 1 θ, σt2 = ln
σt2 Im
+ ln
−1 + σt−2 X X
− ln
−1
2
−1
n
= m ln σt + ln
+ ln (θi ) . (B.18)
i=1
−1
Here we use equation (3.24) for the definition of = −1 + σt−2 X X . For the
sake of understandability
compute the derivative of Q 1 component-wise, that
we
is, we compute ∂ Q 1 θ, σt2 /∂θ j . By Theorem A.97 we know
∂ Q 1 θ, σt2 ∂ ln
−1
1 n n
∂ ln
−1
∂ −1 rs 1
= + = −1 · +
∂θ j ∂θ j θj r=1 s=1 ∂ rs
∂θ j θj
n n
∂ −1 + σt−2 X X rs 1
= rs · +
r=1 s=1
∂θ j θj
1 1
= − 2jj .
θj θj
Now, let us consider the second term Q 2 θ, σt2 . First, we use the Woodbury
formula (see Theorem A.79) to obtain
2 −1 −1
σt Im + XX = σt−2 Im − σt−4 X In + σt−2 X X X
−1
= σt−2 Im − σt−4 X −1 + σt−2 X X X
−1
= σt−2 Im − σt−4 X −1 + σt−2 X X X
= σt−2 Im − σt−2 XX , (B.19)
by exploiting the definition of , as given in equation (3.24). Using the fact that
µ = σt−2 X t = τ and the abbreviation τ = σt−2 X t we can rewrite Q 2 by
−1
Q 2 θ, σt2 = t σt2 Im + XX t = σt−2 t (t − Xµ) = σt−2 t t − τ µ .
Then the derivative of Q 2 w.r.t. to θ j is given by
∂ Q 2 θ, σt2 ∂ σt−2 t t − τ µ ∂µ
= = −τ ,
∂θ j ∂θ j ∂θ j
273 Proofs and Derivations—Part I
because µ is the only term that depends on θ j . Using Theorem A.96 we know
−1
∂µ ∂τ ∂ ∂ −1 ∂ −1 1
= = τ= τ = − τ = − − 2 1 j j τ ,
∂θ j ∂θ j ∂θ j ∂θ j ∂θ j θj
where 1 j j ∈ Ê n×n is used to denote a matrix of zeros except for the j, j –th element
which is one. As a consequence,
∂ Q 2 θ, σt2 ∂µ 1 1 µ2j
= −τ = −τ 1 jj τ = −µ 1 jj µ = − ,
θj ∂θ j θ 2j θ 2j θ 2j
In order to compute the derivative w.r.t. σt2 we again consider Q 1 and Q 2 separately.
Using Theorem A.97 we obtain
∂ Q 1 θ, σt2 ∂ ln
σt2 Im + XX
=
∂σt2 ∂σt2
∂ ln
σt2 Im + XX
∂ σt2 Im + XX
m m
= 2 · rs
m
−1 ! −1 "
= σt2 Im + XX rr = tr σt2 Im + XX .
r=1
Using equation (B.19) together with µ = σt−2 X t (see equation (3.24)) the
innermost term in the latter expression can be rewritten as
2 −1
σt Im + XX t = σt−2 Im − σt−2 XX t = σt−2 (t − Xµ) ,
which then leads to
! " !
∂ Q 2 θ, σt2
−1
"
−1 t − Xµ2
= − σ 2
t mI + XX t σ 2
t mI + XX t = − .
∂σt2 σt4
Putting both results finally gives the derivative of E w.r.t. σt2
! n ! ""
ii
∂ E θ , σt2
1 σ t
2
m − i=1 1 − θi
− t − Xµ 2
=− . (B.21)
∂σt2 2 σt4
Although we are able to compute the derivative of the evidence E w.r.t. its param-
eters θ and σt2 (see equations (B.20) and (B.21)) we see that we cannot explicitly
compute their roots because the terms ii and µi involve the current solution θ
and σt2 . However,
inmorder to maximize the evidence (or log-evidence) w.r.t. the
parameters θ ∈ Ê + and σt2 ∈ Ê+ we exploit the fact that any rearrangement of
the gradient equation
∂ E θ, σt2
! "
= θ̂ − g θ̂ ,
∂θ
θ=θ̂
275 Proofs and Derivations—Part I
allows us to use the update rule θ new = g (θ old ) to compute a (local) maximum of
E, i.e., the fixpoint of g : Ên → Ê n . A closer look at equation (B.20) shows that
θi(new) = ii + µ2i , (B.22)
is a valid update rule. Introducing ζi = 1 − θi−1 ii we see that another possible
update rule is given by
µ2i
θi(new) = , (B.23)
ζi
which follows from (B.22) as
θi = ii + µ2i , ⇔ θi − ii = µ2i , ⇔ θi ζi = µ2i .
In practice it has been observed that the update rule given in equation (B.23)
leads to faster convergence although it does not benefit from the guarantee of
convergence. According to equation (B.21) we see that
(new) t − Xµ2
σt2 = ,
m − ni=1 ζi
is an update rule which has shown excellent convergence properties in our experi-
ments.
In the relevance
vector
machine algorithm it is necessary to compute the log-
evidence E θ, σt2 to monitor convergence. The crucial quantity for the compu-
tation of this quantity is the covariance matrix ∈ Ên×n and its inverse −1 . In
order
to save
computational time we use equation (B.18) to efficiently compute
Q 1 θ, σt2 ,
−1
n
Q 1 θ, σt2 = m ln σt2
+ ln
+ ln (θi ) .
i=1
In order to find the maximum µ ∈ Ên of the density fW|Zm =z we use Bayes’ theorem
PYm |W=w,Xm =x ( y) fW (w)
fW|Zm =z (w) = fW|Xm =x,Ym = y (w) = ,
PYm |Xm =x ( y)
where we exploit the fact that fW|Xm =x = fW as objects have no influence on weight
vectors. Taking logarithms and dropping all terms which do not depend on w we
end up looking for the maximizer µ ∈ Ê n of
J (w) = ln PYm |W=w,Xm =x ( y) + ln fW (w) .
1 m
1
J (w) = ( y + 1) Xw − ln 1 + exp β −1 · xi w − w −1 w + c ,
2β i=1
2
Note that equation (B.29) is quadratic in ρ2 and has the following solution
#
ρ2 = −ρ1
s, t ± ρ12 (
s, t)2 − ρ12 + 1 . (B.31)
A
Let us substitute equation (B.31) into the l.h.s. of equation (B.30). This gives the
following quadratic equation in ρ1
(1 − A + ρ1
s, t) (1 − A − ρ1
s, t) + ρ12 = 2µ2 (1 −
s, t)
(1 − A)2 − ρ12 (
s, t)2 + ρ12 = 2µ2 (1 −
s, t)
1− A = µ2 (1 −
s, t)
2
ρ1 (
s, t) − 1 + 1
2 2
= µ2 (1 −
s, t) − 1 , (B.32)
Inserting this formula back into equation (B.31), and making use of the identity
(B.32), we obtain for ρ2
#
ρ2 = −ρ1
s, t ± ρ12 (
s, t)2 − 1 + 1 = −ρ1
s, t ± µ2 (1 −
s, t) − 1 .
m
1! "
= − n ln (2π ) + ln (||) + xi − µ yi −1 xi − µ yi .
i=1
2
Let us start with the maximizer w.r.t. the mean vectors µ y . Setting the derivative to
zero we obtain, for both classes y ∈ {−1, +1},
∂ L µ y ,
= −1 xi − −1 µ̂ y = 0 ,
∂µ y
(x ,y)∈z
µ y =µ̂ y i
1
µ̂ y = xi , (B.33)
m y (x ,y)∈z
i
This appendix gives all proofs and derivations of Part II in detail. If necessary the
theorems are restated before proving them. This appendix is not as self-contained
as the chapters in the main body of this book; it is probably best to read it in
conjunction with the corresponding chapter.
In this section we present the proof of Theorem 4.7. It involves several lemmas
which will also be of importance in other sections of this book. We shall therefore
start by proving these lemmas before proceeding to the final proof. The version of
the proof presented closely follows the original paper Vapnik and Chervonenkis
(1971) and its polished version, found in Anthony and Bartlett (1999).
In this section we prove three basic lemmas—the key ingredients required to obtain
bounds on the generalization error in the VC, PAC and luckiness frameworks. The
original proof of Lemma C.1 is due to Vapnik and Chervonenkis (1971). We shall
present a simplified version of the proof, as found in Devroye et al. (1996, p. 193).
The proof of Lemma C.3 is the solution to Problem 12.7 in Devroye et al. (1996,
p. 209) and is only a special case of Lemma C.2 which uses essentially the same
technique as the proof of Lemma C.1. In order to enhance readability we shall use
def
the shorthand notation z [i: j ] = z i , . . . , z j .
282 Appendix C
¾
ÚÞ Þ ÚÞÞ Þ
Figure C.1 Graphical illustration of the main step in the basic lemma. If v z (A (z))
deviates from PZ (A (z)) by at least ε but v z̃ (A (z)) is ε2 -close to PZ (A (z)) then v z (A (z))
and v z̃ (A (z)) deviate by at least 2ε .
Lemma C.1 (VC basic lemma) For all probability measures PZ and all subsets
of the σ –algebra over the sample space , if mε 2 > 2 we have
ε
PZm sup |P ( A) − vZ ( A)| > ε < 2PZ2m sup vZ[1:m] ( A) − vZ[(m+1):2m] ( A) > .
A∈ A∈ 2
Q 2 ( z z̃)
Lemma C.2 (Luckiness basic8 lemma) For all probability measures PZ , all mea-
surable logical formulas ϒ : ∞ m=1 → {true, false} and all subsets of the
m
where we have used the assumption mε > 2 of the theorem again. In summary
! ! ε ""
PZ2m ∃A ∈ : ϒ Z[1:m] ∧ vZ[1:m] ( A) = 0 ∧ vZ[(m+1):2m] (A) >
2
1
≥ PZm1 Zm2 Q 1 (Z) ∧ Q 2 Z[1:m] > PZm (Q 2 (Z) ∧ Q 3 (Z))
2
1
= PZm (∃A ∈ : (ϒ (Z)) ∧ (vZ ( A) = 0) ∧ (PZ ( A) > ε)) .
2
The lemma is proved.
Lemma C.3 (PAC basic lemma) For all probability measures PZ and all subsets
of the σ –algebra over the sample space , if mε > 2 we have
PZm (∃A ∈ : (vZ ( A) = 0) ∧ (PZ ( A) > ε))
! ! ε ""
< 2PZ2m ∃A ∈ : vZ[1:m] ( A) = 0 ∧ vZ[(m+1):2m] ( A) > .
2
Proof Using ϒ (z) = true in Lemma C.2 proves the assertion.
Proof Let us start by proving equation (4.11). First, we note that, due to Lemma
C.1, it suffices to bound the probability1
!
ε "
PZ2m ∃h ∈ :
Remp h, (Z1 , . . . , Zm ) − Remp h, (Zm+1 , . . . , Z2m )
> .
2
J (Z)
Since all 2m samples Zi are drawn iid from PZ we know that, for any permutation
π : {1, . . . , 2m} → {1, . . . , 2m},
PZ2m ( J (Z)) = PZ2m ( J ( (Z))) ,
where we use the shorthand notation (Z) to denote the action of π on the
def
indices of Z = (Z1 , . . . , Z2m ), i.e., ((Z1 , . . . , Z2m )) = Zπ(1) , . . . , Zπ(2m) . Now
consider all 2m different swapping permutations s indexed by s ∈ {0, 1}m , i.e.,
s (z) swaps z i and z i+m if, and only if, si = 1. Using the uniform measure PSm
1 Note that in due course of the proof we use the symbol z (and Z) to refer to a (random) training sample (drawn
iid from PZ ) of size 2m.
285 Proofs and Derivations—Part II
where we used the definition of Remp [h, (z 1 , . . . , z m )] given in equation (22) and
z = ((x1 , y1 ) , . . . , (x2m , y2m )). Since we consider the uniform measure over swap-
pings we know that each summand over
j ∈ {1, . . . , m} is a uniformly dis-
tributed random variable with outcomes ±
Iĥ x
! " − I ! "
.
= y ĥ x
i = y
πS ( j )
πS ( j ) i πS ( j +m) πS ( j +m)
As a consequence these random variables are always in the interval [−1, +1] with
expectation zero. Thus, a direct application of Hoeffding’s inequality (see Theorem
A.112) yields
(Z)
mε 2
mε 2
PZ2m ( J (Z)) < EZ2m 2 exp − = EZ2m (Z) · 2 exp − .
i=1
8 8
286 Appendix C
Þ½ Þ Þ Þ Þ Þ½ Þ Þ Þ Þ
Using the worst case quantity (2m) = max z∈ 2m (z) completes the proof
def
of assertion (4.11).
The second equation (4.12) is proven in a similar way. First, according to
Lemma C.3 all we need to bound is the probability
! ε"
PZ2m ∃h ∈ : Remp h, (Z1 , . . . , Zm ) = 0 ∧ Remp h, (Zm+1 , . . . , Z2m ) > .
2
J (Z)
Here, the set ĥ 1 , . . . ĥ (z) ∈ are again (z) different hypotheses w.r.t. the
training errors
! " on z = (z 1 , . . . , z 2m ). In contrast to the previous case, the cardinality
mε
of Q z ĥ i is easily upper bounded by 2m− 2 because, whenever we swap any of
287 Proofs and Derivations—Part II
the at least mε
2
patterns x j +m , y j +m that incur a loss on the second m samples, we
violate the assumption of zero training error on the first m samples (see also Figure
C.2). Since we use the uniform measure PSm it follows that
9 :
(Z)
2 · 2 2 = EZ2m (Z) · 2− 2 .
m− mε mε
−m
PZ2m ( J (Z)) ≤ EZ2m
i=1
Bounding the expectation EZ2m (Z) by its maximum (2m) from above
and using 2−x = exp (−x · ln (2)) ≤ exp − x2 for all x ≥ 0 proves equation
(4.12).
In this section we prove Theorem 4.10 using proof by contradiction. This elegant
proof is due to E. Sontag and is taken from Sontag (1998). At first let us introduce
the function : × → defined by
ϑ
def m
(m, ϑ) = .
i=0
i
Defining mi = 0 whenever i > m we see that (m, ϑ) = 2m if ϑ ≥ m because
by Theorem A.103,
m m
m m
(m, m) = = · 1i = (1 + 1)m = 2m . (C.1)
i=0
i i=0
i
Let us start with a central lemma which essentially forms the heart of the proof.
Lemma C.4 Let m ∈ , ϑ ∈ {0, . . . , m} and r > (m, ϑ), and suppose that the
matrix A ∈ {0, 1}m×r is such that all its r columns are distinct. Then, there is some
(ϑ + 1) × 2ϑ+1 sub-matrix of A whose 2ϑ+1 columns are distinct.
Proof We proceed by induction over m ∈ . Note that the lemma is trivially true
for ϑ = m because, according to equation (C.1), (m, m) = 2m . But, each binary
matrix with m rows has at most 2m distinct columns; hence there exists no value
of r. Let us start by proving the assertion for m = 1: We have just shown that
we only need to consider ϑ = 0. Then, the only possible value of r is 2 because
(1, 0) = 1. For this value, however, the only (ϑ + 1) × 2ϑ+1 = 1 × 2 sub-matrix
288 Appendix C
1. r − r1 > (m − 1, ϑ): In this case the inductive assumption appliesϑ+1 to
B C , i.e., this (m − 1) × (r − r1 ) matrix already contains a (ϑ + 1) × 2
sub-matrix as desired to hold for A.
2. r − r1 ≤ (m − 1, ϑ): In this case the inductive assumption applies to B
because r1 = r − (r − r1 ) > (m, ϑ) − (m − 1, ϑ) = (m − 1, ϑ − 1) where
the last step follows from equation (A.41). Since we know that B contains a ϑ × 2ϑ
sub-matrix with 2ϑ distinct columns it follows that
0···0 1···1
B B
If it were the case that (m) > (m, ϑ) then Lemma C.4 states that there is
a sub-matrix with ϑ + 1 rows and all possible distinct 2ϑ+1 columns, i.e., there
exists a subsequence of z of length ϑ + 1 which is shattered by . This is a
contradiction to the maximal choice of the VC dimension ϑ (see equation (4.18))
Hence, (m) ≤ (m, ϑ) which proves Theorem 4.10.
In this section we prove Theorem 4.19 which is the main result in the luckiness
framework. Note that this proof works “inversely”, i.e., instead of upper bounding
the probability that the expected risk is larger than ε by some term δ (ε) and later
solving for ε, we show that our choice of ε guarantees that the above mentioned
probability is less than δ. Let us restate the theorem.
Theorem C.5 Suppose L is a luckiness function that is probably smooth w.r.t. the
function ω. For any δ ∈ [0, 1], any probability measure PZ and any d ∈ ,
PZm (∃h ∈ : Q 1 (Z, h) ∧ Q 2 (Z, h) ∧ Q 3 (h)) < δ ,
where the propositions Q 1 , Q 2 and Q 3 are given by
δ
Q 1 (z, h) ≡ Remp [h, z] = 0 , Q 2 (z, h) ≡ ω L (z, h) , ≤ 2d ,
4
2 4
Q 3 (h) ≡ R [h] > ε (m, d, δ) , and ε (m, d, δ) = d + ld .
m δ
290 Appendix C
The result (4.23) follows from the fact that the negation of the conjunction says
that, for all hypotheses h ∈ , either Remp [h, z] = 0 or ω (L (z, h) , δ/4) > 2d or
R [h] ≤ ε (m, d, δ).
Proof Due to the length of the proof we have structured it into three steps.
def
We will abbreviate ε (m, d, δ) by ε and will use the shorthand notation z [i: j ] =
def
z i , z i+1 , . . . , z j . If Q i and Q j are propositions, Q i j = Q i ∧ Q j .
Upper bounding the probability of samples where the growth function increases
too much
Symmetrization by Permutation
By defining Q 5 (z, h) = Q 2 z [1:m] , h ∧ (¬S (z)), that is,
Q 5 (z, h) ≡ L (z, h) ≤ 2d ,
we see that the proposition J (z) ∧ (¬S (z)) is given by
Q (z) ≡ ∃h ∈ : Q 1 z [1:m] , h ∧ Q 4 z [(m+1):2m] , h ∧ Q 5 (z, h) .
Now we shall use a technique which is known as symmetrization by permutation
and which is due to Kahane (1968) according to van der Vaart and Wellner (1996).
Since all 2m samples Zi are drawn iid from PZ we know that, for any permutation
π : {1, . . . , 2m} → {1, . . . , 2m},
PZ2m (Q (Z)) = PZ2m (Q ( (Z))) ,
where we use the shorthand notation (Z) to denote the action of π on the
def
indices of Z = (Z1 , . . . , Z2m ), i.e., ((Z1 , . . . , Z2m )) = Zπ(1) , . . . , Zπ(2m) . Now
consider all 2m different swapping permutations s indexed by s ∈ {0, 1}m , i.e.,
s (z) swaps z i and z i+m if, and only if, si = 1. It follows that
PZ2m (Q (Z)) = ES PZ2m |S=s (Q (s (Z))) = EZ2m PS|Z2m =z (Q (S (z)))
for any discrete measure PS . Clearly, PZ2m S = PZ2m PS which implies that
PS|Z2m =z = PS . Hence, if we show that PS (Q (S (z))) is at most 4δ for each
double sample z ∈ 2m , we have proven the theorem. The appealing feature of
the technique is that we can fix z in the further analysis. In our particular case we
obtain that PS (Q (S (z))) is given by
PS ∃h ∈ : Q 1 S (z)[1:m] , h ∧ Q 4 S (z)[(m+1):2m] , h ∧ Q 5 (z, h) , (C.2)
292 Appendix C
where we used the fact that the luckiness function is permutation invariant. Since
the double sample z ∈ 2m is fixed, we can arrange the hypotheses h ∈ in
decreasing order
of their luckiness on the fixed double sample, i.e., i > j ⇒
L (z, h i ) ≤ L z, h j . Now let
def
c (i) =
l0−1 h j (x1 ) , y1 , . . . , l0−1 h j (x2m ) , y2m | j ∈ {1, . . . , i}
be the number of equivalence classes w.r.t. the zero-one loss incurred by the first
i hypotheses. Finally, let i ∗ be such that c (i ∗ ) ≤ 2d but c (i ∗ + 1) > 2d . Then
equation (C.2) can be rewritten as
PS ∃ j ∈ 1, . . . , i ∗ : Q 1 S (z)[1:m] , h j ∧ Q 4 S (z)[(m+1):2m] , h j ,
because by construction we know that h 1 , . . . , h i ∗ +1 are the only hypotheses that
are at least as lucky as h i ∗ +1 on z but L (z, h i ∗ +1 ) > 2d . Since c (i ∗ ) ≤ 2d there
are not more than q ≤ 2d hypotheses ĥ 1 , . . . , ĥ q ⊆ {h 1 , . . . , h i ∗ } which realize
the c (i ∗ ) different zero-one loss function patterns. Thus, by an application of the
union bound we have that the probability in equation (C.2) is bounded from above
by
q ! ! " ! ""
PS Q 1 S (z)[1:m] , ĥ i ∧ Q 4 S (z)[(m+1):2m] , ĥ i .
i=1
Since all 2m samples Zi are drawn iid from PZ we know that, for any permutation
π : {1, . . . , 2m} → {1, . . . , 2m},
PZ2m (Q (Z)) = PZ2m (Q ( (Z))) ,
where we use the shorthand notation (Z) to denote the action of π on the
def
indices of Z = (Z1 , . . . , Z2m ), i.e., ((Z1 , . . . , Z2m )) = Zπ(1) , . . . , Zπ(2m) . Now
consider all 2m different swapping permutations s indexed by s ∈ {0, 1}m , i.e.,
s (z) swaps z i and z i+m if, and only if, si = 1. It follows that
PZ2m (Q (Z)) = ES PZ2m |S=s (Q (s (Z))) = EZ2m PS|Z2m =z (Q (S (z)))
for any discrete measure PS . Clearly, PZ2m S = PZ2m PS which implies that
PS|Z2m =z = PS . Hence, if we show that PS (Q (S (z))) is at most δ for each dou-
ble sample z ∈ 2m and the uniform measure PS on {0, 1}m , we have proven the
theorem. Let d = ϑ (z) be the empirical VC dimension on the fixed doublesam-
ple. By definition, there must exists at least one subsequence z̃ = z i1 , . . . , z id ⊂ z
of length d that is shattered by . The important observation is that any sub-
sequence of length j ∈ {1, . . . , d} of z̃ must also be shattered by because,
otherwise, z̃ is not shattered by . Let j ∗ ∈ [0, m] be such that τ ( j ∗ , δ) · j ∗ = d;
d
j∗ = + ln (δ) . (C.3)
4
Whenever any swapping permutation πs is such that more than , j ∗ - examples
of the subsequence z̃ are swapped into the first half, Q (s (z)) cannot be true
294 Appendix C
because the empirical VC dimension on the first half was at least , j ∗ - + 1 and τ is
monotonically increasing in its first argument. Thus, PS (Q (S (z))) is bounded
from above by
,
j ∗- j ∗-
, j∗
−m d −d 1 ed (eτ ( j ∗ , δ)) j ∗
2 · Sd, j ≤ 2 < d < ,
j =0 j =0
j 2 j∗ 2τ ( j ∗ ,δ)· j ∗
where Sd, j is the number of swappings that swap exactly j of the d examples into
the first half. The second step follows directly from Lemma C.7 and the observation
that 4 j ∗ < d for all δ ∈ 0, 12 . The last step is a consequence of Theorem
A.105 and equation (C.3). In order to complete the proof it suffices to show that
j ∗ ln (eτ ( j ∗ , δ)) − j ∗ · τ ( j ∗ , δ) · ln (2) < ln (δ). Using the definition of τ and
Theorem A.101 we see that the latter term is given by
∗ 1 1
j 1 + ln 4 1 − ∗ ln (δ) − 4 ln (2) 1 − ∗ ln (δ)
j j
1 4 ln (2)
< j ∗ 1 + ln (4) − ∗ ln (δ) − 4 ln (2) + ln (δ)
j j∗
< (4 ln (2) − 1) ln (δ) < 2 ln (δ) < ln (δ) ,
because 1 + ln (4) − 4 ln (2) < 0 and ln (δ) < 0 for all δ ∈ 0, 12 . The theorem is
proved.
C.7 For any double sample z ∈ , for any d ≤ m, for any subsample
2m
Lemma
z̃ = z i1 , . . . , z id ⊂ z and for any j < d/3, the number Sd, j of swappings such
that exactly j examples of z̃ are within the first m examples is bounded by
d
Sd, j ≤ · 2m−d .
j
Proof First, let us assume
that the
subsample z̃ is such that no two indices i p and
i q have the property that
i p − i q
= m (Figure C.3 (left)). Then we observe that
the number Sd, j is exactly dj 2m−d due to the following argument: Since d ≤ m
and no two indices are the swapping counterpart of each other, there must exists
a swapping πs0 such that all examples in z̃ ⊂ z are in the second half. In order to
ensure that exactly j of the d examples are in the first half we have to swap them
back into the first half (starting from s0 (z)). Now there are dj many choices of
distinct examples to swap. Further, swapping any of the m − d examples not in z̃
295 Proofs and Derivations—Part II
Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ
Þ
Þ Þ Þ Þ Þ Þ Þ
Þ Þ Þ Þ Þ Þ
Þ
¼
Þ
¼
Þ
Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ Þ
Figure C.3 Counting swappings such that exactly j of the d = 6 examples (gray
background) are within the first m = 7 examples. (Left) Since no two of the d indices
are the swapping counterpart to each other we can swap all gray examples into the second
half and start counting. (Right) Since z 1 and z 8 are within the d examples there will always
be r = 1 gray example in the first half.
that are in the second half of s0 (z) into the first half would not alter the event;
hence the 2m−d term.
Now let
us assume
that there are r ∈ {1, . . . , j } pairs of indices i p and i q
such that
i p − i q
= m and let Sd, r
j be the number of swappings that satisfy
the condition stated (Figure C.3 (right)). In this case, whatever the swapping, r
examples of z̃ are in the first half, and to make up the number to j a further j − r
indices have to be chosen out of the d − 2r. Hence
d − 2r
r
Sd, = · 2m−d+2r
j
j −r
where
( j − r) (d − j − r)
g ( j, r) = 4 .
(d − 2r) (d − 2r − 1)
296 Appendix C
It is easily verified that, for any r ∈ {1, . . . , j }, the function g attains its maximum
for j = d/2. However, as by assumption j < d/3 we know that
4 (d − 3r) (2d − 3r)
g ( j, r) <
9 (d − 2r) (d − 2r − 1)
in the possible range of j . Hence, the function g ( j, r) is strictly less than 1 if d ≥ 9
because this implies that
d 2 − 9d + 18r > 0 ⇒ 4 (d − 3r) (2d − 3r) < 9 (d − 2r) (d − 2r − 1) .
As a consequence, for d ≥ 9 the result is true because
d
r
Sd, j < S 0
d, j = Sd, j = · 2m−d
j
for all r ∈ {1, . . . , j }. For d < 9 the only possible cases are j = 0 (trivial),
j = 1, r = 1 and j = 2, r ∈ {1, 2} which can easily verified to be true. The
theorem is proved.
This elegant proof of Lemma 4.31 can be found in Bartlett and Shawe-Taylor
(1998) and dates back to Gurvits (1997). We will restate the original proof using
def
the shorthand notation A = ai ∈A ai .
Proof The proof involves two lemmas that make extensive use of the geometry in
an inner product space .
We showthat if S ⊂ γ –shattered by , then every
X is|S|γ
subset S0 ⊆ S satisfies
S0− (S \ S0) ≥ B . At the same time, for all
√
S ⊂ X , some S0 ⊆ S satisfies S0 − (S \ S0 ) ≤ |S|ς . Combining these
two assertions yields
(
|S| γ γ Bς 2
1≥ √ = |S| ⇒ |S| ≤ ,
|S|Bς Bς γ
for every γ –shattered set S. This proves the lemma.
Using
the Cauchy-Schwarz inequality (see Theorem A.106) and the assumption
w y ≤ B, we know
B S0 − (S \ S0) ≥ w y · S0 − (S \ S0 )
4 5
≥ w y, S0 − (S \ S0) ≥ |S| γ .
In the other case, we consider yi = +1 if, and only if, xi ∈ (S \ S0), and use an
identical argument.
Proof The proof uses the probabilistic method (Alon et al. 1991). Suppose S =
{x1 , . . . , xm }. We choose S0 randomly by defining xi ∈ S0 ⇔ Bi = +1, where
B1 , . . . , Bm are independent random variables with PBi (+1) = PBi (−1) = 12 .
298 Appendix C
Then,
2
H 2 I m
EBm S0 − (S \ S0 ) = EBm Bi xi
i=1
9$ %:
m m
= EBm Bi xi , Bjxj
i=1 j =1
9$ %:
m
m
= EBm Bi xi , Bjxj
i=1 j =1
9$ %:
m
= EBm Bi xi , B j x j + Bi xi
i=1 i = j
m 0 1
= EBm Bi · B j xi , x j + EBm Bi xi 2
i=1 i = j
m 0 1
= EBm Bi xi 2 ≤ |S| ς 2 ,
i=1
the minimizer√of f (). A straightforward calculation shows that this value has to
be ∗ (D) = ς D because
d f
∗ 2ς 2 D 2 d 2 f
6
= 2 (D) − = 0,
=2+ √ > 0.
d ∗ ∗
( (D)) 3 2
d = ς D√ ςD
This section presents the quantifier reversal lemma due to David McAllester (see
McAllester (1998)). This lemma is of particular importance in the derivation of
generalization error bounds in the PAC-Bayesian framework (see Section 5.1).
Broadly speaking, if a logical formula acts on two random variables and we have
the formula true for all values of one of the random variables, then the quantifier
reversal lemma allows us to move the all-quantifier over that random variable into
a “all-but-a-fraction-of-δ” statement for a fixed value of the other random variable.
Thus, it provides an upper bound on the conditional distribution of the random
variable.
Lemma C.10 (Quantifier reversal lemma) Let X and Y be random variables and
let δ range over (0, 1]. Let ϒ : × × Ê → {true, false} be any measurable
logical formula on the product space such that for any x ∈ and y ∈ we have
Lemma C.11 Let X be a random variable such that PX ([0, 1]) = 1 and let g be
any measurable, monotonically decreasing function from the interval [0, 1] to the
reals, i.e., g : [0, 1] → Ê is such that x ≥ y implies g (x) ≤ g (y). If
∀δ ∈ [0, 1] : FX (δ) ≤ δ ,
then
&
1
EX g (X) ≤ g (x) dx . (C.5)
0
301 Proofs and Derivations—Part II
where the first line follows from partial integration and the second line uses the fact
that FX (0) = 0 and FX (1) = 1. Since −g (x) is, by assumption, a monotonically
increasing function we know that any positive difference g (x) − g (x̃) > 0 implies
that x − x̃ > 0. Hence for the first integral we can use the upper bound on FX to
obtain
& 1
EX g (X) ≤ x d (−g (x)) + g (1)
0
& 1 & 1
= g (x) dx + [x · (−g (x))]0 + g (1) =
1
g (x) dx .
0 0
Using this lemma we can now proceed to prove the quantifier reversal lemma.
Now, note that for β ∈ (0, 1), g (z) = z β−1 is an monotonically decreasing function
since the exponent is strictly negative. From Lemma C.11 we conclude
&
1
1
∀x ∈ : ∀δ ∈ (0, 1] : EY|X=x f β−1 (x, Y) = ET Tβ−1 ≤ z β−1 dz = .
0 β
302 Appendix C
This section contains the proof of Theorem 5.10. In course of this proof we need
several theorems and lemmas which have been delegated to separate subsections
due to their length.
Proof of Theorem 5.10. Geometrically, the hypothesis space is isomorphic the
unit sphere in Ê n (see Figure 2.8). Let us assume that PW is uniform on the unit
sphere. Given the training sample z ∈ m and a classifier having normal w ∈
we show in Theorem C.13 that the open ball
.
# >
(w) = v ∈
w, v > 1 − z (w) ⊆
2 (C.6)
is fully within the version space V (z). Such a set (w) is, by definition, point
symmetric w.r.t. w and hence we can use − ln (PW ( (w))) to bound the expected
risk of h w . Since PW is uniform on the unit sphere, the value − ln (PW ( (w))) is
303 Proofs and Derivations—Part II
simply the logarithm of the volume ratio of the surface of the unit sphere to the
surface of all v ∈ Ïsatisfying equation (C.6). A combination of Theorem C.14
and C.15 shows that this ratio is given by
' 2π n−2
1 0 sin (θ) dθ
= ln !√
PW ( (w))
ln "
' arccos 1−( z (w)) 2
sin (θ) dθ
n−2
0
1
≤ n · ln ( + ln (2) .
1 − 1 − 2z (w)
Using Theorem 5.4 and Lemma 5.9 and bounding ln (2) by one from above we
obtain the desired result. Note that m points {x1 , . . . , xm } maximally span an
m–dimensional space and, thus, we can marginalize over the remaining n − m
dimensions of feature space Ã. This gives d = min (m, n).
around a linear classifier having normal w of unit length contains classifiers within
version space V (z) only. Here, z (w) is the margin of the hyperplane w on a set
of points normalized by the length xi of the xi (see equation (5.11) for a formal
definition). In order to prove this result we need the following lemma.
Lemma C.12 Suppose à ⊆ n2 is a fixed feature space. Assume we are given two
points w ∈ Ï and x ∈ Ã such that
w, x = γ > 0. Then, for all v ∈ Ï with
-
γ2
w, v > 1 − (C.7)
x2
it follows that
v, x > 0.
Proof Since we only evaluate the inner product of any admissible v ∈ Ï with
w ∈ Ï and x ∈ Ã, we can make the following approach
304 Appendix C
ܾ
Ü
Ü
Û
Ü
ܽ
Û Û
Ú Ü Ú
Ü Ü Û
Figure C.4 Suppose the point x1 (or x2 ) is given.
# We must show that all classifiers
having normal w̃ of unit length and w, w̃ > 1 − γi2 / xi 2 are on the same side of
the hyperplane {v ∈ |
xi , v = 0 }, i.e., ṽ, xi > 0, where γi =
xi , w. From the
picture it is clear #that, regardless of xi , sin (α) = (γi / xi ) or equivalently cos (α) =
(
1 − sin (α) = 1 − γi2 / xi 2 . Obviously, all vectors w̃ of unit length which enclose an
2
angle less than α withw are on the same side (the dark cone). As cos (α) is monotonically
decreasing for α ∈ 0, π2 , these classifiers must satisfy w, w̃ = cos w, w̃ >
#
1 − γi2 / xi 2 .
x x
v=λ +τ w−γ .
x x2
Note that the vectors x
x and w − γ x
x
2 are orthogonal by construction. Further-
more, the squared length of w − γ x2 is given by 1 − γ / x . Therefore, the unit
x 2 2
-
2
γ 1 − λ 2 γ γ2
λ ± 1 − > 1 −
x γ
1 − x
2
2
x 2
x2
-
γ γ2 ! ( "
λ − 1− 1 ± 1 − λ 2 > 0.
x x2
f (λ)
In order to solve for λ we consider the l.h.s. as a function of λ and determine the
range of values in which f (λ) is positive. A straightforward calculation reveals
that [0, λmax ] with
-
2γ γ2
λmax = 1− ,
x x2
Theorem C.13 Suppose ⊆ n2 is a fixed feature space. Given a training sample
z = (x, y) ∈ ( ×({−1, +1})m and w ∈ such that z (w) > 0, for all v ∈
such that
w, v > 1 − 2z (w) we have
∀i ∈ {1, . . . , m} : yi
v, xi > 0 .
parameterize classifiers consistent with the ith point xi . Clearly, the intersection of
all Bi gives the classifiers w which jointly satisfy the constraints yi
w, xi > 0.
Noticing that the size of Bi depends inversely on yi
xi , w we see that all v
such that
w, v > z (w) jointly classify all points xi correctly. The theorem is
proved.
In this subsection we explicitly derive the volume ratio between the largest inscrib-
able ball in version space and the whole parameter space for the special case of
linear classifiers in Ên . Given a point w ∈ Ï and a positive number γ > 0 we can
characterize the ball of radius γ in the parameter space by
Ï
Ï
γ (w) = v ∈
w − v2 < γ 2 = v ∈
w, v > 1 − γ 2/2 .
Ï
vol( )
In the following we will calculate the exact value of the volume ratio
vol( γ (w))
where w can be chosen arbitrarily (due to the symmetry of the sphere).
Theorem C.14 Suppose we are given a fixed feature space à ⊆ n2 . Then the
fraction of the whole surface vol (Ï ) of the unit sphere to the surface vol γ (w)
with Euclidean distance less than γ from any point w ∈ Ï is given by
' π n−2
vol (Ï ) 0 sin (θ) dθ
= .
vol γ (w)
! "
' arccos 1− γ2
2
0 sinn−2 (θ) dθ
Proof As the derivation requires the calculation of surface integrals on the hyper-
sphere in n2 we define each admissible w ∈ Ï by its polar coordinates and carry
out the integration over the angles. Thus we specify the coordinate transformation
τ : Ê n → Ê n from polar coordinates into Cartesian coordinates, i.e., every w ∈ Ï
is expressed via n − 2 angles θ = (θ1 , . . . , θn−2 ) ranging from 0 to π , one angle
0 ≤ ϕ ≤ 2π , and the radius function r (θ, ϕ) which is in the case of a sphere of
constant value r. This transformation reads
τ1 (r, ϕ, θ ) = r · sin(ϕ) sin(θ1 ) · · · sin(θn−2 ) (C.8)
τ2 (r, ϕ, θ ) = r · cos(ϕ) sin(θ1 ) · · · sin(θn−2 )
.. .. .. ..
. . . .
τn−1 (r, ϕ, θ ) = r · cos(θn−3 ) sin(θn−2 )
307 Proofs and Derivations—Part II
If Jn−1 = (j1 , . . . , jn−1 ) ∈ Ê (n−1)×(n−1) is the Jacobian matrix for the mapping τ
when applied for points in Ê n−1 then we see that
sin (θn−2 ) · Jn−1 r · cos (θn−2 ) · j1
Jn = . (C.12)
cos (θn−2 ) 0 · · · 0 −r · sin (θn−2 )
Hence the nth row of this matrix contains only two non-zero elements
∂τn (r, ϕ, θ )
∂τn (r, ϕ, θ )
∂r
= cos (θn−2 ) , ∂θn−2
θn−2
= −r · sin (θn−2 ) .
r
308 Appendix C
Now, using the Laplace expansion of (C.11) in the nth row (see Definition A.64)
we obtain
|Jn | = (−1)n+1 cos (θn−2 )
J[n,1]
− (−1)n+n · r sin (θn−2 )
J[n,n]
,
where J[i, j ] is the (n − 1) × (n − 1) sub-matrix obtained by deletion of the ith row
and the
j th column of Jn . From equation (C.12) and Theorem A.71 it follows that
J[n,n]
= sinn−1 (θn−2 ) · |Jn−1 |. Further we know that
J[n,1]
= |(sin (θn−2 ) · j2 , . . . , sin (θn−2 ) · jn−1 , r · cos (θn−2 ) · j1 )|
= (−1)n−2 · |(r · cos (θn−2 ) · j1 , sin (θn−2 ) · j2 , . . . , sin (θn−2 ) · jn−1 )|
= (−1)n−2 · r · cos (θn−2 ) · sinn−2 (θn−2 ) · |Jn−1 | .
Hence, |Jn | is given by
|Jn | = − cos2 (θn−2 ) · r · sinn−2 (θn−2 ) · |Jn−1 | − r · sinn (θn−2 ) · |Jn−1 | ,
= − |Jn−1 | · r · sinn−2 (θn−2 ) cos2 (θn−2 ) + sin2 (θn−2 )
= − |Jn−1 | · r · sinn−2 (θn−2 ) ,
which, substituted into equation (C.10) gives
vol ( )
Ï ' π n−2
= ' 0$
sin (θn−2 ) dθn−2
, (C.13)
vol γ (w) 0 sin
n−2
(θn−2 ) dθn−2
where $ = arccos 1 − γ 2 /2 . The theorem is proved.
In this section we present a practically useful upper bound for the expression given
in equation (C.13). In order to check the usefulness of this expression we have
compared the exact value of (C.13) with the upper bound and found that in the
interesting regime of large margins the bound seems to be within a factor of 2 from
the exact value (see Figure C.5).
200
exact value exact value
20
log volume ratio
150
15
100
10
50
5
0
0
0.2 0.4 0.6 0.8 1.0 0.2 0.4 0.6 0.8 1.0
x x
(a) (b)
Figure C.5 Comparison of the bound (C.14) (solid line) with the exact value (C.13)
(dashed line) over the whole range of possible values of x for (a) n = 10 and (b) n = 100.
Interestingly, in the relevant regime of large values of x the bound seems to be very tight
regardless of the number of dimensions.
where
2 (i + 1) · 2 (i + 2) · · · · · 2 j
B j,i =
(2i + 1) · (2i + 3) · · · · · (2 j − 1)
2 · 4···2j 1 · 3 · · · (2i − 1)
= ·
1 · 3 · · · (2 j − 1) 2 · 4 · · · (2i)
2i
4 j ( j !)2 (2i)! 4j i
= = i 2 j . (C.16)
(2 j )! (i!)2 4i 4 j
Expanding the first term of the sum we obtain for the logarithm of the fraction of
integrals
S ( j, 1) 2
ln = ln j 2i i .
S ( j, x) 2x + (2x − 1) i=1 x (1 − x)i
i
j
2i 2x (2x)2 j − 1
x (1 − x) ≤
i i
.
i=1
i 2x − 1
Inserting this into the last expression and taking into account that (2x − 1) ≤ 0 in
the relevant regime of x we obtain
S ( j, 1) 2 2
ln ≤ ln = ln
2x + (2x − 1) (
S ( j, x) 2x (2x)2 j −1) (2x)2 j +1
(2x−1)
= − (2 j + 1) ln (x) + ln (2) ,
In the course of the proof of Theorem C.15 we needed a tight upper bound on the
j 2i i
growth of i=1 i x (1 − x)i as a function of x. In the following we present a
series of lemmas resulting in a reasonably accurate upper bound called Bollmann’s
lemma (Lemma C.20).
2 (i + j + 1)
= 2 ,
i + j +1
∞
2i 2x
x i (1 − x)i = .
i=1
i 1 − 2x
∞
2i 1 u 2i+1
arcsin (u) = u + ,
i=1
i 4i 2i + 1
∞
d arcsin (u) 2i 1 2i 1
= 1+ u =√ .
du i=1
i 4i
1 − u2
√
Using u = 2 x (1 − x) we obtain the result, i.e.,
∞
2i 1 ! ( "2i 1
2 x (1 − x) = √ −1
i=1
i 4i 1 − 4x (1 − x)
∞ √
2i i 1 − 1 − 4x (1 − x)
x (1 − x)i = √
i=1
i 1 − 4x (1 − x)
(
1 − (1 − 2x)2
= ( .
(1 − 2x)2
∞ ∞
2 (i + j ) i+ j 2 (i + j + 1) i+ j +1
4x 2 x (1 − x)i+ j ≤ x (1 − x)i+ j +1 .
i=1
i + j i=1
i + j + 1
313 Proofs and Derivations—Part II
def ∞ 2(i+ j ) i+ j
Proof Put A (i, j, x) = i=1 i+ j
x (1 − x)i+ j . Then the result to be
proven simply reads 4x 2 · A (i, j, x) ≤ A (i, j + 1, x). By Lemma C.17 we have
∞
2i 2 ( j + 1) i+ j
x (1 − x)i+ j ≤ 2 · A (i, j, x) .
i=1
i j +1
j
2i i 2x (2x)2 j − 1
x (1 − x) ≤
i
.
i=1
i 2x − 1
i=1
i 2x − 1
2x 1 − (2x)2 j
=
1 − 2x
2x (2x)2 j +1
≤ −
1 − 2x 1 − 2x
314 Appendix C
∞
2i (2x)2 j +1
≤ x (1 − x) −
i i
,
i=1
i 1 − 2x
which is equivalent to
∞
2 j +1 2 (i + j ) i+ j
(2x) ≤ (1 − 2x) x (1 − x)i+ j .
i=1
i + j
where the second line is assumed to be true and the third line follows from Lemma
C.19. The lemma is proved.
In this section we present the proofs of the main theorems from Section 5.3.
In order to enhance the readability of the proofs we use the same notation as
introduced on page 186, that is, given a sample z ∈ m , a natural number
i ∈ {1, . . . , m} and an example z ∈ let
This subsection proves Theorem 5.31. The proof is mainly taken from Bousquet
and Elisseeff (2000).
Proof of Theorem 5.31. In course of the proof we shall often consider the differ-
ence between the functions f z\i and f z obtained by learning on the reduced training
def
sample z \i and the full training sample z. Let us
denote
this difference
by
f =
f z\i − f z . Then we must bound the difference
l f z\i (x) , t − l ( f z (x) , t)
for
any (x, t) ∈ . Using the Lipschitz continuity of l we know that
l f z (x) , t − l ( f z (x) , t)
≤ Cl ·
f z (x) − f z (x)
= Cl · | f (x)| .
\i \i
| f (x)| = |
f, k (x, ·)| ≤ f · k (x, x) . (C.17)
Adding the above two inequalities and exploiting Rreg [ f, z] = Rreg f, z \i +
1
l f , t we obtain
m ( (x i ) i )
1
(l ( f z (xi ) , ti ) − l (( f z + η · f ) (xi ) , ti )) + λ · A ≤ B , (C.18)
m
where A and B are given by
2 2
A = f z 2 + f z\i − f z + η · f 2 − f z\i − η · f ,
B = Rm f z + η f, z \i + Rm f z\i + η f, z \i − Rm f z , z \i − Rm f z\i , z \i .
Using the definition of f allows us to determine A directly
A = 2η −
f z , f + f z\i , f − η f 2
= 2η f z\i − f z , f − η f 2 = 2η (1 − η) f 2 .
Since the loss function l is assumed to be convex in its first argument we know that
for all (x, t) ∈ and all η ∈ (0, 1)
l (( f z + η · f ) (x) , t) − l (( f z ) (x) , t) ≤ η · l f z\i (x) , t − l (( f z ) (x) , t) .
This implies that the following two inequalities hold true
Rm f z + η f, z \i − Rm f z , z \i ≤ η Rm f z\i , z \i − Rm f z , z \i ,
Rm f z\i − η f, z \i − Rm f z\i , z \i ≤ η Rm f z , z \i − Rm f z\i , z \i .
Adding these two inequalities shows that B ≤ 0. Using the Lipschitz continuity of
the loss we see that equation (C.18) can be written as
l (( f z + η · f ) (xi ) , ti ) − l ( f z (xi ) , ti )
f 2 ≤
2η (1 − η) λm
Cl · |( f z + η · f ) (xi ) − f z (xi )| Cl
≤ = | f (xi )| .
2η (1 − η) λm 2 (1 − η) λm
Taking the limit of the latter expression for η → 0 shows that f 2 ≤
Cl
2λm
| f (xi )| which completes the proof.
In this subsection we prove Theorem 5.32. We start with some simple lemmas
which help us to structure the main proof.
317 Proofs and Derivations—Part II
By virtue of Theorem 5.27, for any i ∈ {1, . . . , m} the integrand q (z, (x, t)) is
upper bounded by 2βm thanks to the βm –stability of the learning algorithm .
Thisproves the first assertion.
Similarly, for the second assertion we know that
EZm R [ f Z ] − Rloo , Z can be written as
m & &
1
l ( f z (x) , t) − l f z\i (xi ) , ti dFZ ((x, t)) dFZm (z) .
m i=1 m
Since, for any i ∈ {1, . . . , m}, the example z i = (xi , ti ) is not used in finding f z\i
but has the same distribution as z = (x, t) the latter expression equals
m & &
1
l ( f z (x) , t) − l f z\i (x) , t dFZ ((x, t)) dFZm (z) .
m i=1 m
q(z,(x,t ))
Proof The first assertion follows directly from Theorem 5.27 noticing that, by the
βm –stability of , for all z ∈ m , all z̃ ∈ and all i ∈ {1, . . . , m}
R [ fz] − R fz
≤ EXT
l ( f z (X) , T) − l f z (X) , T
≤ 2βm .
i↔z̃ i↔z̃
In order to prove the second assertion we note that, for all i ∈ {1, . . . , m},
m−1 1
Remp [ f z , z] = Remp f z , z \i + l ( f z (xi ) , ti ) .
m m
m−1 1
Remp f zi↔z̃ , z i↔z̃ = Remp f zi↔z̃ , z \i + l f zi↔z̃ (x̃) , t˜ .
m m
As,
by assumption, is a βm –stable
algorithm, using Theorem 5.27 shows that
Remp f z , z \i − Remp f z , z \i
cannot exceed 2βm . Further, by the finiteness
i↔z̃
of the loss function l it follows that
Remp [ f z , z] − Remp f z , z i↔z̃
≤ 2 m − 1 βm + b < 2βm + b .
i↔z̃
m m m
The proof of the final assertion is analogous and exploiting that, for all i ∈
{1, . . . , m}
1 m
1
Rloo [, z] = l f z\ j x j , t j + l f z\i (xi ) , ti ,
m j =1 m
j =i
1 m ! " 1
Rloo [, z i↔z̃ ] = l f z(i↔z̃)\ j x j , t j + l f z\i (x̃) , t˜ .
m j =1 m
j =i
Taking into account that f z\ j is obtained by learning using a training sample of size
m − 1 the third statement of the lemma follows immediately.
Using these two lemmas allows us to present the proof of Theorem 5.32.
Proof of Theorem 5.32. Let us start with the first equation involving the training
error Remp [ f z , z]. To this end we define the function g (Z) = R [ f Z ]− Remp f Z , Z
of the m random variables Z1 , . . . , Zm . By Lemma C.22 we know that for all
i ∈ {1, . . . , m}
b
sup |g (z) − g (z i↔z̃ )| ≤ 4βm + ,
z∈ m ,z̃∈ m
319 Proofs and Derivations—Part II
This section contains all the pseudocodes of the algorithms introduced in the book.
A set of implementations in R (a publicly available version of S-PLUS) can be
found at http://www.kernel-machines.org/.
In the following subsections we give the pseudocode for support vector machines
and adaptive margin machines. We assume access to a solver for the quadratic
programming problem which computes the solution vector x∗ to the following
problem
1
minimize x Hx + c x
2
subject to A1 x = b1 .
A2 x ≤ b2 ,
l ≤ x ≤ u. (D.1)
Packages that aim to solving these type of problem are, for example MINOS
(Murtagh and Saunders 1993), LOQO (Vanderbei 1994) or CPLEX (CPLEX Op-
timization Inc. 1994). An excellent introduction to the problem of mathematical
programming is given in Hadley (1962), Hadley (1964) and Vanderbei (1997). We
used the PR LOQO package1 of A. Smola together with R, which is a publicly
available version of S-PLUS, for all experiments.
x =
α,
H =
YGY ⇔ Hi j = yi y j k xi , x j ,
c −1m ,
=
l =
0m ,
1
u = 1m .
2λm
We obtain hard margin SVMs for λ → 0 (in practice we used λm = 10^{-20}). In the
case of quadratic margin loss we apply a hard margin SVM with the diagonal of H
additively corrected by λm · 1 (see Subsection 2.4.2).
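As an illustration only, the following is a minimal sketch in R of assembling these quantities from a Gram matrix G and labels y ∈ {−1, +1}^m; the function qp_solve is a hypothetical placeholder for any solver of problem (D.1), such as the packages mentioned above, and is not part of the book's code.

# A minimal sketch (not the book's implementation): build the data of problem
# (D.1) for the linear soft margin SVM from a Gram matrix G and labels y.
svm_qp_data <- function(G, y, lambda) {
  m <- length(y)
  Y <- diag(y)                            # diagonal matrix of the labels
  list(H = Y %*% G %*% Y,                 # H_ij = y_i * y_j * k(x_i, x_j)
       c = rep(-1, m),                    # c = -1_m
       l = rep(0, m),                     # l = 0_m
       u = rep(1 / (2 * lambda * m), m))  # u = (1 / (2*lambda*m)) * 1_m
}
# d <- svm_qp_data(G, y, lambda = 1)
# alpha <- qp_solve(d$H, d$c, d$l, d$u)   # qp_solve is a hypothetical solver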
Finally, in the case of Adaptive Margin Machines we set the variables as follows:
\[
\mathbf{x} = \begin{pmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\xi} \end{pmatrix}, \qquad
\mathbf{c} = \begin{pmatrix} \mathbf{0}_m \\ \mathbf{1}_m \end{pmatrix}, \qquad
\mathbf{H} = \mathbf{0}_{2m,2m}\,, \qquad
\mathbf{A}_2 = \left( -\mathbf{Y}\mathbf{G}\mathbf{Y},\ -\mathbf{I}_m \right)\,, \qquad
\mathbf{b}_2 = -\mathbf{1}_m\,, \qquad
\mathbf{l} = \begin{pmatrix} \mathbf{0}_m \\ \mathbf{0}_m \end{pmatrix},
\]
In this section we give the pseudocode for both Bayesian linear regression (Algorithm 4) and Bayesian linear classification (Algorithm 5). These algorithms are also an implementation of Gaussian processes for regression and classification. Note that the classification algorithm is obtained by using a Laplace approximation to the true posterior density f_{T_{m+1}|X=x,Z^m=z}. For a Markov chain Monte Carlo implementation see Neal (1997b).
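Purely as an illustration (and not Algorithm 4 itself), the following is a minimal sketch of Gaussian process regression prediction in R; the kernel function k, the list xs of training inputs, the target vector targets, the noise variance sigma2 and the test input xstar are assumed inputs.

# A minimal sketch of Gaussian process regression prediction, assuming a kernel
# function k(u, v), training inputs xs (a list), targets, noise variance sigma2
# and a test input xstar.
gp_predict <- function(k, xs, targets, sigma2, xstar) {
  m <- length(xs)
  G <- outer(seq_len(m), seq_len(m),
             Vectorize(function(i, j) k(xs[[i]], xs[[j]])))  # Gram matrix
  R <- chol(G + sigma2 * diag(m))        # G + sigma2*I = R'R, R upper triangular
  alpha <- backsolve(R, forwardsolve(t(R), targets))
  kstar <- sapply(xs, function(xi) k(xi, xstar))
  v <- forwardsolve(t(R), kstar)
  list(mean = sum(kstar * alpha),                        # predictive mean at xstar
       var  = k(xstar, xstar) - sum(v * v) + sigma2)     # predictive variance (with noise)
}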
In this section we give the pseudocode for relevance vector machines—both in the regression estimation (Algorithm 6) and classification scenario (Algorithm 7). In order to unburden the algorithms we use the notation w[n] to refer to the vector (w_{n_1}, . . . , w_{n_{|n|}}) obtained from w by arranging those components indexed by n = (n_1, . . . , n_{|n|}). As an example consider w = (w_1, . . . , w_{10}) and n = (1, 3, 6, 10), which gives w[n] = (w_1, w_3, w_6, w_{10}). We have given the two algorithms in a form where we delete feature φ_i if the associated hyper-parameter θ_i falls below a prespecified tolerance, which should be close to the maximal precision of the computer used. This is necessary because, otherwise, the inversion of Σ^{-1} would be an ill-posed problem and would lead to numerical instabilities. We can monitor convergence of the algorithm for classification learning by inspecting the value J, which should only be increasing. In contrast, when considering the regression estimation case we should use the Cholesky decomposition of the matrix Σ^{-1} to efficiently compute the evidence. The Cholesky decomposition of the matrix Σ^{-1} is given by Σ^{-1} = R'R where R is an upper triangular matrix (see Definition A.55). The advantage of this decomposition is that Σ = R^{-1}(R^{-1})' by virtue of Theorem A.77. Further, having such a decomposition simplifies the task of computing the
For a more detailed treatment of numerical issues in matrix algebra the interested reader is referred to Golub and van Loan (1989) and Press et al. (1992). These algorithms can also be applied to the expansion coefficients α ∈ ℝ^m in a kernel classifier model h_α (or kernel regression model f_α)
\[
f_{\boldsymbol{\alpha}}(x) = \sum_{i=1}^{m} \alpha_i\, k\!\left(x_i, x\right)\,, \qquad
h_{\boldsymbol{\alpha}}(x) = \mathrm{sign}\!\left( \sum_{i=1}^{m} \alpha_i\, k\!\left(x_i, x\right) \right)\,.
\]
The only difference between the algorithms is that w must be replaced by α and X needs to be replaced by G = (k(x_i, x_j))_{i,j=1}^{m,m} ∈ ℝ^{m×m}. It is worth mentioning that, in principle, any function k : 𝒳 × 𝒳 → ℝ could be used, that is, not only symmetric positive semidefinite functions corresponding to Mercer kernels are allowed.
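As a small illustration, evaluating such a kernel expansion in R may look as follows; the kernel k, the list xs of training inputs and the coefficient vector alpha are assumed inputs, and the commented RBF kernel is only an example, not prescribed by the text.

# A minimal sketch: evaluate f_alpha and h_alpha for a kernel function k,
# training inputs xs (a list), coefficients alpha and a new input x.
f_alpha <- function(k, xs, alpha, x) {
  sum(alpha * sapply(xs, function(xi) k(xi, x)))   # f_alpha(x) = sum_i alpha_i k(x_i, x)
}
h_alpha <- function(k, xs, alpha, x) {
  2 * (f_alpha(k, xs, alpha, x) >= 0) - 1          # sign(t) = 2*I(t >= 0) - 1, as in the book
}
# Example kernel (an assumption; any k: X x X -> R is admissible here):
# k <- function(u, v) exp(-sum((u - v)^2))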
Require: The maximum number of iterations, i_max; a tolerance for pruning TOL ∈ ℝ^+
for i = 1, . . . , i_max do
  n = {j ∈ {1, . . . , n} : θ_j > TOL}  (all non-pruned indices)
  X ∈ ℝ^{m×|n|} contains the |n| columns from X indexed by n
  θ̃ = θ[n]
  Σ = (σ_t^{-2} X'X + diag(θ̃_1^{-1}, . . . , θ̃_{|n|}^{-1}))^{-1}
  w[n] = σ_t^{-2} Σ X' t
  ζ = (1 − θ̃_1^{-1} · Σ_{11}, . . . , 1 − θ̃_{|n|}^{-1} · Σ_{|n|,|n|})
  θ_{n_j} = w_{n_j}^2 / ζ_j  for all j ∈ {1, . . . , |n|}
  σ_t^2 = ‖t − Xw[n]‖^2 / (m − ζ'1)
end for
return the weight vector w
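As an illustration only (not the book's reference implementation), one pass of the update above might be written in R as follows; the design matrix X, the target vector targets, the hyper-parameter vector theta and the noise variance sigma2 are assumed inputs.

# A minimal sketch of one update of Algorithm 6, assuming a design matrix X
# (m x n), targets, hyper-parameters theta (length n) and noise variance sigma2.
rvm_regression_step <- function(X, targets, theta, sigma2, TOL = 1e-12) {
  m <- nrow(X)
  keep <- which(theta > TOL)                  # non-pruned indices n
  Xn <- X[, keep, drop = FALSE]
  theta_n <- theta[keep]
  S <- solve(crossprod(Xn) / sigma2 + diag(1 / theta_n, nrow = length(keep)))
  w_n <- as.vector(S %*% crossprod(Xn, targets)) / sigma2
  zeta <- 1 - diag(S) / theta_n
  theta[keep] <- w_n^2 / zeta                 # hyper-parameter update
  sigma2 <- sum((targets - Xn %*% w_n)^2) / (m - sum(zeta))
  list(theta = theta, sigma2 = sigma2,
       w = replace(numeric(ncol(X)), keep, w_n))  # pruned components set to zero
}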
Require: The maximum number of iterations, i_max; a tolerance for pruning TOL ∈ ℝ^+
w = 0
for i = 1, . . . , i_max do
  n = {j ∈ {1, . . . , n} : θ_j > TOL}  (all non-pruned indices)
  X ∈ ℝ^{m×|n|} contains the |n| columns from X indexed by n
  θ̃ = θ[n];  Θ^{-1} = diag(θ̃_1^{-1}, . . . , θ̃_{|n|}^{-1});  t = Xw[n]
  J = ½ (y + 1)'t − Σ_{i=1}^m ln(1 + exp(t_i)) − ½ w[n]'Θ^{-1}w[n]
  repeat
    π = ((1 + exp(−t_1))^{-1}, . . . , (1 + exp(−t_m))^{-1})
    g = X'(½ (y + 1) − π) − Θ^{-1}w[n]
    H = −(X' · diag(π_1(1 − π_1), . . . , π_m(1 − π_m)) · X + Θ^{-1})
    Δ = H^{-1}g,  η = 1
    repeat
      w̃ = w;  w̃[n] = w[n] − ηΔ;  t̃ = Xw̃[n]
      J̃ = ½ (y + 1)'t̃ − Σ_{i=1}^m ln(1 + exp(t̃_i)) − ½ w̃[n]'Θ^{-1}w̃[n]
      η ← η/2
    until J̃ > J
    w = w̃;  J = J̃;  t = t̃
  until ‖g‖ < TOL
  Σ = −H^{-1}
  ζ = (1 − θ̃_1^{-1} · Σ_{11}, . . . , 1 − θ̃_{|n|}^{-1} · Σ_{|n|,|n|})
  θ_{n_i} = w_{n_i}^2 / ζ_i  for all i ∈ {1, . . . , |n|}
end for
return the weight vector w
In this section we present the Fisher discriminant algorithm both in primal and dual variables (Algorithms 8 and 9). As mentioned earlier, when computing w = Σ̂^{-1}(µ̂_{+1} − µ̂_{−1}) it is advantageous to solve the linear system Σ̂w = µ̂_{+1} − µ̂_{−1} for w instead. Many software packages provide numerically stable algorithms, such as the Gauss-Jordan decomposition, for solving systems of linear equations. Note that we have included the estimated class probabilities P_Y(y) in the construction of the offset b.
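As an illustration only (not Algorithm 8 verbatim; the pooled covariance estimate and the exact form of the offset below are assumptions), a primal Fisher discriminant in R might look as follows.

# A minimal sketch of a primal Fisher discriminant, assuming a data matrix X of
# mapped inputs (m x n) and labels y in {-1, +1}.  The pooled covariance estimate
# and the exact form of the offset b are assumptions, not copied from Algorithm 8.
fisher_discriminant <- function(X, y) {
  Xp <- X[y == +1, , drop = FALSE]
  Xm <- X[y == -1, , drop = FALSE]
  mu_p <- colMeans(Xp); mu_m <- colMeans(Xm)
  center <- function(A) sweep(A, 2, colMeans(A))
  S <- (crossprod(center(Xp)) + crossprod(center(Xm))) / nrow(X)  # pooled covariance
  w <- solve(S, mu_p - mu_m)          # solve S w = mu_p - mu_m instead of inverting S
  p_p <- nrow(Xp) / nrow(X); p_m <- nrow(Xm) / nrow(X)
  b <- -0.5 * sum(w * (mu_p + mu_m)) + log(p_p / p_m)  # offset with class probabilities
  list(w = w, b = b)                  # classify via sign(X %*% w + b)
}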
Whenever possible, the third column gives a pointer to the page of first occurrence
or definition.
A
learning algorithm 24
ERM empirical risk minimization algorithm 26
structural risk minimization algorithm 29
ε bound minimization learning algorithm 136
Í on-line learning algorithm 182
α ∈ Êm linear expansion coefficients of the weight 32
vector w
B
n Borel sets over Ên 200
τ (x) open ball of radius τ around x 217
τ (x) closed ball of radius τ around x 217
Bayes z Bayes classification strategy 80
Bayes H (z) generalized Bayes classification 168
strategy
C
C ∈ Ê2×2 cost matrix 22
Cov (X, Y) covariance between X and Y 203
Cov (X, Y) covariance matrix for X and Y 203
(z) compression function 176
χ model parameter(s) 65
D
δ ∈ (0, 1] confidence level
model space 79
E
EX X expectation of X 201
ε∈Ê deviation or generalization error bound 122
ei ith unit vector 223
(z) estimator for a probability measure PZ given 117
the sample z ∈ m
e (d) dyadic entropy number 143
" A (d) entropy number of A 222
F
φi : → Ê feature on 19
φ: → feature mapping 19
⊆ Ê real-valued function space 21
fw : → Ê real-valued function 20
fz : → Ê real-valued function learned from z ∈ m 186
f∗ : → Ê optimal real-valued function 21
fˆ : → Ê real-valued function in a cover Fγ (x) 141
Fγ (x) ⊂ cover of given the sample x 141
FX distribution function 201
fθ Fisher score at θ 45
fX density of the random variable X 201
fat (γ ) fat shattering dimension at scale γ 147
ϕ A (d) inner entropy number 222
G
G ∈ Ê m×m Gram matrix 33
γi (w) geometrical margin of w at z i 50
γ z (w) geometrical margin of w on the training set z 50
γ̃i (w) functional margin of w at z i 50
γ̃ z (w) functional margin of w on the training set z 50
Gibbs z Gibbs classification strategy 81
Gibbs H (z) generalized Gibbs classification strategy 166
H
⊆ hypothesis space 19, 21
h: → hypothesis 19
hw : → binary hypothesis 20
hθ : → induced classifier for a given probability 116
model PZ|Q=θ
h∗ : → optimal hypothesis 118
I
I indicator function 200
I Fisher information matrix 45
i index vector 38, 41
Iv,u set of index vectors for v in u 41
Id,m set of index vectors 176
K
⊆ n2 feature (kernel) space 19
k (x, x̃ ) kernel value between x, x̃ ∈ 32
K ∈ Êm×m kernel matrix 33
L
n2 ⊆ Ê n space of square summable sequences of 218
length n
L (z, h) level of h given z 136
L2 space of square integrable functions 218
(λi )i∈ sequence of eigenvalues 35
l ŷ,y loss between ŷ and y 21
l0−1 ŷ,y zero-one loss between ŷ and y 22
lC ŷ, y cost matrix loss between ŷ and y 23
(t)
lmargin margin loss of t 52
llin tˆ, y linear soft margin loss between tˆ and y 54
lquad tˆ, y quadratic soft margin loss between 54
tˆ and y
lε tˆ, t ε–insensitive loss between tˆ and t 59
l2 tˆ, t squared loss 82
(θ) likelihood of the parameter θ 75
L (z, h) luckiness of h given z 136
ln (·) natural logarithm
ld (·) logarithm to base 2
M
m training sample size 18
⊆ ℓ₂ⁿ Mercer space 35
ρ(ε) packing number at scale ε 220
M(z) mistake bound for A 183
N
ℕ natural numbers
n dimension of feature space 19
N dimension of input space 38
𝒩ρA(ε) covering number at scale ε 220
𝒩(z) empirical covering number at z for binary-valued functions 285
𝒩(m) worst case covering number for binary-valued functions 286
𝒩∞(γ, x) empirical covering number of ℱ at scale γ 141
𝒩∞(γ, m) (worst case) covering number of ℱ at scale γ 141
ν fraction of margin errors 60
O
𝒪(·) order of a term
P
P_X probability measure on 𝒳 200
𝒫 family of probability measures 214
π s swapping permutation 284
Q
ℚ rational numbers
θ ∈ 𝒬 parameter vector 214
𝒬 parameter space 214
Q the random variable of θ; in Remark 5.7 a measure such as P 116, 170
θ̂_z estimator for the parameter of the probability measure P_Z estimated using 117
R
ℝ real numbers
R[f] expected risk of f ∈ ℱ 22
R[h] expected risk of h ∈ ℋ 22
R_θ[h] expected risk of h ∈ ℋ under P_{Z|Q=θ} 116
R[A, z] generalization error of A given z ∈ 𝒵^m 25
R[A, m] generalization error of A for training sample size m 61
R_emp[f, z] empirical risk of f ∈ ℱ given z ∈ 𝒵^m 25
R_reg[f, z] regularized risk of f ∈ ℱ given z ∈ 𝒵^m 29
ℛ(z, i) reconstruction function 176
ρ metric 216
S
sign sign function, i.e., sign (x) = 2 · Ix≥0 − 1
alphabet 41
covariance matrix 38
ς radius of sphere enclosing training data 51
T
tr (A) trace of the matrix A 227
t ∈ Êm sample of real-valued outputs 82
U
u ∈ r string 41
u ∈ 2N vector in input space 38
Í (y, x, h) update algorithm 182
V
vx empirical probability measure 18, 203
V (z) ⊆ Ï version space 26
V (z) ⊆ version space in 26
Var (X) variance of X 202
v ∈ r string 41
v ∈ 2N vector in input space 38
ϑ VC dimension of 128
ϑ (z) empirical VC dimension of 140
W
⊂ unit hyper-sphere in Ê n 21
(z) canonical hyperplanes 52
w∈ weight vector 20
W±1 (x) hemispheres in induced by x 24
W0 (x) decision boundary induced by x 24
Wz equivalence classes of weight vectors 24
W (α) Wolfe dual 53
[f] regularization functional of f ∈ 29
X
input space 17
x∈ m
sample of training objects 18
x ∈ input point
xi ∈ x ith training point 18
x ∈ input vector if ∈ 2N 30
(x )i ith component of x 30
x = φ (x) mapped input point x 19
X ∈ Êm×n data matrix of mapped input points 19
X ±1 (w) decision regions induced by w 24
X 0 (w) ∈ decision boundary in feature space 24
X 0 (w) ∈
decision boundary in input space 24
σ –algebra over 200
ξ vector of margin slack variables 54
Y
output space (often {−1, +1}) 17
y ∈ m sample of training outputs 18
y∈ output class
yi ∈ y class of ith training point 18
ψi : → Ê Mercer feature on 34
ψ : → Mercer feature mapping 35
Z
= × (labeled) data space 18
z ∈ m (labeled) training sample 18
z \i ∈ m−1 training sample with the ith element deleted 186
z i↔z ∈ m training sample with the ith element replaced 186
by z ∈
z [i: j ] subsequence z i , . . . , z j of z 281
Z random training sample
References
Duda, R. O., P. E. Hart, and D. G. Stork (2001). Pattern Classification and Scene
Analysis. New York: John Wiley and Sons. Second edition.
Feller, W. (1950). An Introduction To Probability Theory and Its Applications, Volume 1.
New York: John Wiley and Sons.
Feller, W. (1966). An Introduction To Probability Theory and Its Applications, Volume 2.
New York: John Wiley and Sons.
Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals
of Eugenics 7, 179–188.
Floyd, S. and M. Warmuth (1995). Sample compression, learnability, and the Vapnik-
Chervonenkis dimension. Machine Learning 27, 1–36.
Freund, Y. (1998). Self bounding learning algorithms. In Proceedings of the Annual
Conference on Computational Learning Theory, Madison, Wisconsin, pp. 247–258.
Freund, Y., Y. Mansour, and R. E. Schapire (2000). Analysis of a pseudo-Bayesian
prediction method. In Proceedings of the Conference on Information Science and
Systems.
Gardner, E. (1988). The space of interactions in neural networks. Journal of Physics
A 21, 257–270.
Gardner, E. and B. Derrida (1988). Optimal storage properties of neural network models.
Journal of Physics A 21, 271–284.
Gentile, C. and M. K. Warmuth (1999). Linear hinge loss and average margin. In
M. S. Kearns, S. A. Solla, and D. A. Cohn (Eds.), Advances in Neural Information
Processing Systems 11, Cambridge, MA, pp. 225–231. MIT Press.
Gibbs, M. and D. J. C. Mackay (1997). Efficient implementation of Gaussian processes.
Technical report, Cavendish Laboratory, Cambridge, UK.
Girosi, F. (1998). An equivalence between sparse approximation and support vector
machines. Neural Computation 10(6), 1455–1480.
Glivenko, V. (1933). Sulla determinazione empirica delle leggi di probabilita. Giornale
dell’Istituta Italiano degli Attuari 4, 92.
Golub, G. H. and C. F. van Loan (1989). Matrix Computations. Johns Hopkins University
Press.
Graepel, T., R. Herbrich, and J. Shawe-Taylor (2000). Generalisation error bounds for
sparse linear classifiers. In Proceedings of the Annual Conference on Computational
Learning Theory, pp. 298–303.
Herbrich, R., T. Graepel, and C. Campbell (2001). Bayes point machines. Journal of
Machine Learning Research 1, 245–279.
Herbrich, R., T. Graepel, and J. Shawe-Taylor (2000). Sparsity vs. large margins for lin-
ear classifiers. In Proceedings of the Annual Conference on Computational Learning
Theory, pp. 304–308.
Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables.
Journal of the American Statistical Association 58, 13–30.
Jaakkola, T., M. Meila, and T. Jebara (2000). Maximum entropy discrimination. In
S. A. Solla, T. K. Leen, and K.-R. Müller (Eds.), Advances in Neural Information
Processing Systems 12, Cambridge, MA, pp. 470–476. MIT Press.
Jaakkola, T. S., M. Diekhans, and D. Haussler (1999). Using the Fisher kernel method
to detect remote protein homologies. In Proceedings of the International Conference
on Intelligent Systems for Molecular Biology, pp. 149–158. AAAI Press.
Jaakkola, T. S. and D. Haussler (1999a). Exploiting generative models in discriminative
classifiers. In M. S. Kearns, S. A. Solla, and D. A. Cohn (Eds.), Advances in Neural
Information Processing Systems 11, Cambridge, MA, pp. 487–493. MIT Press.
Jaakkola, T. S. and D. Haussler (1999b). Probabilistic kernel regression models. In
Proceedings of the 1999 Conference on AI and Statistics.
Jaynes, E. T. (1968, September). Prior probabilities. IEEE Transactions on Systems
Science and Cybernetics SSC-4(3), 227–241.
Jebara, T. and T. Jaakkola (2000). Feature selection and dualities in maximum entropy
discrimination. In Uncertainty in Artificial Intelligence.
Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems.
Proceedings of the Royal Statistical Society A 186, 453–461.
Joachims, T. (1998). Text categorization with support vector machines: Learning with
many relevant features. In Proceedings of the European Conference on Machine
Learning, Berlin, pp. 137–142. Springer.
Joachims, T. (1999). Making large-scale SVM learning practical. In B. Schölkopf,
C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods—Support Vector
Learning, Cambridge, MA, pp. 169–184. MIT Press.
Johnson, N. L., S. Kotz, and N. Balakrishnan (1994). Continuous Univariate Distribu-
tions. Volume 1 (Second Edition). John Wiley and Sons.
Kahane, J. P. (1968). Some Random Series of Functions. Cambridge University Press.
Karchin, R. (2000). Classifying g-protein coupled receptors with support vector ma-
chines. Master’s thesis, University of California.
Kearns, M. and D. Ron (1999). Algorithmic stability and sanity-check bounds for leave-
one-out cross-validation. Neural Computation 11(6), 1427–1453.
Kearns, M. J. and R. E. Schapire (1994). Efficient distribution-free learning of proba-
bilistic concepts. Journal of Computer and System Sciences 48(3), 464–497.
Kearns, M. J., R. E. Schapire, and L. M. Sellie (1992). Toward efficient agnostic learning
(extended abstract). In Proceedings of the Annual Conference on Computational
Learning Theory, Pittsburgh, Pennsylvania, pp. 341–352. ACM Press.
Kearns, M. J. and U. V. Vazirani (1994). An Introduction to Computational Learning
Theory. Cambridge, Massachusetts: MIT Press.
Keerthi, S. S., S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy (1999b). A fast
iterative nearest point algorithm for support vector machine classifier design. Tech-
nical Report Technical Report TR-ISL-99-03, Indian Institute of Science, Bangalore.
http://guppy.mpe.nus.edu.sg/∼mpessk/npa_tr.ps.gz.
Keerthi, S. S., S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy (1999a). Im-
provements to Platt’s SMO algorithm for SVM classifier design. Technical Report
CD-99-14, Dept. of Mechanical and Production Engineering, Natl. Univ. Singapore,
Singapore.
Kiefer, J. (1977). Conditional confidence statements and confidence estimators. Journal
of the American Statistical Association 72, 789–807.
Kimeldorf, G. S. and G. Wahba (1970). A correspondence between Bayesian estimation
on stochastic processes and smoothing by splines. Annals of Mathematical Statis-
tics 41, 495–502.
Kivinen, J., M. K. Warmuth, and P. Auer (1997). The perceptron learning algorithm
vs. winnow: Linear vs. logarithmic mistake bounds when few input variables are
relevant. Artificial Intelligence 97(1–2), 325–343.
Kockelkorn, U. (2000). Lineare statistische Methoden. Oldenburg-Verlag.
Kolmogorov, A. (1933). Sulla determinazione empirica di una leggi di distribuzione.
Giornale dell’Istituta Italiano degli Attuari 4, 33.
Kolmogorov, A. N. and S. V. Fomin (1957). Functional Analysis. Graylock Press.
Kolmogorov, A. N. and V. M. Tihomirov (1961). ε-entropy and ε-capacity of sets in
functional spaces. American Mathematical Society Translations, Series 2 17(2), 277–
364.
Platt, J. C., N. Cristianini, and J. Shawe-Taylor (2000). Large margin DAGs for mul-
ticlass classification. In S. A. Solla, T. K. Leen, and K.-R. Müller (Eds.), Advances
in Neural Information Processing Systems 12, Cambridge, MA, pp. 547–553. MIT
Press.
Poggio, T. (1975). On optimal nonlinear associative recall. Biological Cybernetics 19,
201–209.
Pollard, D. (1984). Convergence of Stochastic Processes. New York: Springer.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery (1992). Numerical
Recipes in C: The Art of Scientific Computing (2nd ed.). Cambridge: Cambridge
University Press. ISBN 0-521-43108-5.
Robert, C. P. (1994). The Bayesian choice: A decision theoretic motivation. New York:
Springer.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage
and organization in the brain. Psychological Review 65(6), 386–408.
Rosenblatt, F. (1962). Principles of neurodynamics: Perceptron and Theory of Brain
Mechanisms. Washington D.C.: Spartan-Books.
Roth, V. and V. Steinhage (2000). Nonlinear discriminant analysis using kernel func-
tions. In S. A. Solla, T. K. Leen, and K.-R. Müller (Eds.), Advances in Neural Infor-
mation Processing Systems 12, Cambridge, MA, pp. 568–574. MIT Press.
Ruján, P. (1993). A fast method for calculating the perceptron with maximal stability.
Journal de Physique I France 3, 277–290.
Ruján, P. (1997). Playing billiards in version space. Neural Computation 9, 99–122.
Ruján, P. and M. Marchand (2000). Computing the Bayes kernel classifier. In A. J.
Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans (Eds.), Advances in Large
Margin Classifiers, Cambridge, MA, pp. 329–347. MIT Press.
Rumelhart, D. E., G. E. Hinton, and R. J. Williams (1986). Parallel Distributed Pro-
cessing. Cambridge, MA: MIT Press.
Rychetsky, M., J. Shawe-Taylor, and M. Glesner (2000). Direct Bayes point machines.
In Proceedings of the International Conference on Machine Learning.
Salton, G. (1968). Automatic Information Organization and Retrieval. New York:
McGraw-Hill.
Sauer, N. (1972). On the density of families of sets. Journal of Combinatorial The-
ory 13, 145–147.
diagonal matrix, 223 evidence, 77, 86, 87, 94, 169, 194, 271, 274
diagonal penalization, 56, 58, 107 evidence maximization, 78, 78, 86, 93, 97, 110
directed acyclic graph, 59 expectation, 202, 203
discriminant analysis, 111 expectation value, 202, 301
discriminative model, 103, 104, 106, 116 expected loss, 12, 22
distribution expected risk, 22, 61
Bernoulli distribution, 207 exponential distribution, 208
Beta distribution, 77, 208 exponential family, 46, 104, 207, 214
binomial distribution, 77, 207
exponential distribution, 208 fat shattering bound, 149, 154
Gamma distribution, 97, 208 fat shattering bound for linear classifiers, 150
Gaussian distribution, 208 fat shattering dimension, 147, 148, 161, 296
normal distribution, 208 feature, 19, 20, 32, 35, 38, 39, 45
Poisson distribution, 207 feature mapping, 10, 19, 51, 82, 85, 131
predictive distribution, 87, 90, 94, 98 feature selection, 32
uniform distribution, 207 feature space, 19, 34, 174
distribution function, 201 Fisher discriminant, 105, 106, 107, 111, 329
DNA sequence, 41 Fisher information matrix, 45, 46
dual space, 32 Fisher kernel, 44, 46, 69
dyadic entropy number, 143, 148 Fisher score, 45
Fisher score mapping, 45
effective complexity, 138, 157 function learning, 5
effective VC dimension, 151 functional margin, 50, 50, 52, 56, 60, 66, 69, 141,
eigenfunction, 34 153
eigenvalue, 34, 84, 219, 231
eigenvector, 219, 231 Gamma distribution, 97, 208
EM algorithm, 7 Gamma function, 208
empirical covering number, 142, 146 Gauss-Jordan decomposition, 325
empirical measure, 18, 104, 122, 203 Gaussian, 36, 37, 104, 210
empirical risk, 25 Gaussian distribution, 208
empirical risk minimization, 25, 26, 28, 29, 32, 68, Gaussian process, 36, 69, 81, 84, 87, 91, 110, 265,
127, 129, 132, 133, 140, 158, 180, 189 270, 325
empirical VC dimension, 139, 292 generalization error, 25, 64, 118, 121, 123, 125,
empirical VC dimension luckiness, 139 160, 176, 185, 189
entropy, 112 generalization error bound, 12, 157, 179, 188, 190
entropy number, 222 generalized inner product, 217
dyadic entropy number, 143, 148 generalized Rayleigh coefficient, 106, 235
equivalence classes, 24, 136, 141, 285, 290, 292 generative model, 103, 104, 116, 120
ERM algorithm, 121, 123 geometrical margin, 51, 52, 56, 57, 60, 69, 135, 151
error geostatistics, 110
generalization error, 25, 64, 118, 121, 123, 125, ghost sample, 124, 137, 146, 181
160, 176, 185, 189, 193 Gibbs classification strategy, 81, 164, 165, 167, 194
leave-one-out error, 61, 71 Gibbs-Bayes lemma, 168, 170
margin error, 60, 71 Glivenko-Cantelli classes, 147, 159
training error, 25 Glivenko-Cantelli lemma, 159
error bar, 85, 86 Gram matrix, 33, 34, 43, 53, 56, 62, 64, 84, 85, 91,
estimation, 3 95, 108, 111, 259, 266, 270, 327
estimator, 117 Green’s function, 69
Euclidean distance, 9, 57 growth function, 127, 128, 140, 160, 164, 287
Euclidean inner product, 218 growth function bound, 128
Euler’s inequality, 240, 242 guaranteed risk, 29
linear soft margin loss, 54, 59, 187, 260 Markov’s inequality, 243, 244, 245, 302
Lipschitz constant, 187, 192 martingale, 250
Lipschitz continuity, 187, 190, 315, 316 martingale difference sequence, 251
logistic regression, 5 matrix
LOOM, 64 cost matrix, 22, 56
LOQO, 323 covariance matrix, 89, 203, 278
loss derivative of a matrix, 237
ε–insensitive loss, 59, 187, 190 determinant of a matrix, 225
clipped linear soft margin loss, 191 diagonal matrix, 223
cost matrix loss, 23 Fisher information matrix, 45, 46
expected loss, 12, 22 Gram matrix, 33, 34, 43, 53, 56, 62, 64, 84, 85,
hinge loss, 55, 63, 65, 70 91, 95, 108, 111, 259, 266, 270, 327
linear soft margin loss, 54, 59, 187, 260 Hessian matrix, 268, 270, 276
margin loss, 52, 66, 91, 130 identity matrix, 223
quadratic soft margin loss, 54, 55, 85, 261 inverse of a matrix, 84, 228
sigmoidal loss, 91 Jacobian matrix, 307
squared loss, 82 kernel matrix, 33, 33
zero-one loss, 22 Kronecker product, 236
loss function, 21 lower triangular matrix, 223
lossy compression bound, 181, 194 non-singular matrix, 224
lossy compression scheme, 180 orthogonal matrix, 224
lower triangular matrix, 223 parameter matrix, 223
luckiness, 136, 138, 160, 292 partitioned inverse of a matrix, 230, 266, 269
empirical VC dimension luckiness, 139 positive definite matrix, 224
PAC luckiness, 139 positive semidefinite matrix, 34, 224
vanilla luckiness, 140 rank of a matrix, 33, 224
luckiness bound, 137 singular matrix, 224
luckiness framework, 135, 160, 175, 181, 193, 281, square matrix, 223
289 symmetric matrix, 223
luckiness generalization error bound, 135 trace of a matrix, 227
transpose of a matrix, 224
machine learning, 1, 2 triangular matrix, 223
Mahalanobis kernel, 37, 86 upper triangular matrix, 223
MAP, 80, 107, 164, 165 maximal stability perceptron, 70
margin, 49, 54, 64, 140, 141, 160, 173, 176, 303 maximum entropy, 112
functional margin, 50, 50, 52, 56, 60, 66, 69, 141, maximum likelihood, 107, 158, 278
153 maximum-a-posteriori, 29, 80, 97, 107
geometrical margin, 51, 52, 56, 57, 60, 69, 135, McDiarmid’s inequality, 185, 188, 252, 319
151 MCMC, 100
large margin, 51, 54, 69 mean field approximation, 110
large margin principle, 49, 51 measurability, 201
normalized margin, 173 measurable space, 200
soft margin, 49, 63, 70, 156 measure, 112
margin distribution, 298 conditional measure, 103
margin distribution lemma, 154 conditional probability measure, 84, 202
margin error, 60, 71 empirical measure, 18, 104, 122, 203
margin loss, 52, 66, 91, 130 marginal probability measure, 202
marginal probability measure, 202 marginalized prior measure, 97
marginalized prior measure, 97 posterior measure, 74, 83, 110, 112, 164, 165
Markov chain, 90, 100, 110 prior measure, 74, 76, 82, 92, 93, 97, 110
Markov Chain Monte Carlo, 100, 111 probability measure, 200
training error, 25
training sample, 3, 18
transpose of a matrix, 224
triangle inequality, 120, 121, 186, 216, 252, 319
triangular matrix, 223
truncation operator, 148
weight vector, 20, 26, 49, 50, 53, 56, 82, 84, 91, 92,
94, 95, 97, 105, 108, 149, 177, 182, 271, 321
well-posed problem, 240
Wolfe dual, 53, 54, 56, 260, 261–263
Woodbury formula, 83, 229, 234, 269, 272
words, 42
zero-one loss, 22