Support Vector Machines Succinctly


By

Alexandre Kowalczyk

Foreword by Daniel Jebaraj


Copyright © 2017 by Syncfusion, Inc.

2501 Aerial Center Parkway


Suite 200
Morrisville, NC 27560
USA
All rights reserved.

Important licensing information. Please read.

This book is available for free download from www.syncfusion.com on completion of a registration form.

If you obtained this book from any other source, please register and download a free copy from
www.syncfusion.com.

This book is licensed for reading only if obtained from www.syncfusion.com.

This book is licensed strictly for personal or educational use.

Redistribution in any form is prohibited.

The authors and copyright holders provide absolutely no warranty for any information provided.

The authors and copyright holders shall not be liable for any claim, damages, or any other
liability arising from, out of, or in connection with the information in this book.

Please do not use this book if the listed terms are unacceptable.

Use shall constitute acceptance of the terms listed.

SYNCFUSION, SUCCINCTLY, DELIVER INNOVATION WITH EASE, ESSENTIAL, and .NET ESSENTIALS are the registered trademarks of Syncfusion, Inc.

Technical Reviewer: James McCaffrey


Copy Editor: Courtney Wright
Acquisitions Coordinator: Hillary Bowling, online marketing manager, Syncfusion, Inc.
Proofreader: John Elderkin



Table of Contents

The Story behind the Succinctly Series of Books
About the Author
Preface
Introduction
Chapter 1 Prerequisites
    Vectors
        What is a vector?
        The dot product
    Understanding linear separability
        Linearly separable data
    Hyperplanes
        What is a hyperplane?
        Understanding the hyperplane equation
        Classifying data with a hyperplane
        How can we find a hyperplane (separating the data or not)?
    Summary
Chapter 2 The Perceptron
    Presentation
    The Perceptron learning algorithm
        Understanding the update rule
        Convergence of the algorithm
        Understanding the limitations of the PLA
    Summary
Chapter 3 The SVM Optimization Problem
    SVMs search for the optimal hyperplane
    How can we compare two hyperplanes?
        Using the equation of the hyperplane
        Problem with examples on the negative side
        Does the hyperplane correctly classify the data?
        Scale invariance
    What is an optimization problem?
        Unconstrained optimization problem
        Constrained optimization problem
        How do we solve an optimization problem?
    The SVMs optimization problem
    Summary
Chapter 4 Solving the Optimization Problem
    Lagrange multipliers
        The method of Lagrange multipliers
        The SVM Lagrangian problem
    The Wolfe dual problem
    Karush-Kuhn-Tucker conditions
        Stationarity condition
        Primal feasibility condition
        Dual feasibility condition
        Complementary slackness condition
    What to do once we have the multipliers?
        Compute w
        Compute b
        Hypothesis function
    Solving SVMs with a QP solver
    Summary
Chapter 5 Soft Margin SVM
    Dealing with noisy data
        Outlier reducing the margin
        Outlier breaking linear separability
    Soft margin to the rescue
        Slack variables
    Understanding what C does
    How to find the best C?
    Other soft-margin formulations
        2-Norm soft margin
        nu-SVM
    Summary
Chapter 6 Kernels
    Feature transformations
        Can we classify non-linearly separable data?
        How do we know which transformation to apply?
    What is a kernel?
    The kernel trick
    Kernel types
        Linear kernel
        Polynomial kernel
        RBF or Gaussian kernel
        Other types
    Which kernel should I use?
    Summary
Chapter 7 The SMO Algorithm
    The idea behind SMO
    How did we get to SMO?
    Why is SMO faster?
    The SMO algorithm
        The analytical solution
        Understanding the first heuristic
        Understanding the second heuristic
    Summary
Chapter 8 Multi-Class SVMs
    Solving multiple binary problems
        One-against-all
        One-against-one
        DAGSVM
    Solving a single optimization problem
        Vapnik, Weston, and Watkins
        Crammer and Singer
    Which approach should you use?
    Summary
Conclusion
Appendix A: Datasets
    Linearly separable dataset
Appendix B: The SMO Algorithm
Bibliography
The Story behind the Succinctly Series
of Books

Daniel Jebaraj, Vice President


Syncfusion, Inc.

Staying on the cutting edge
As many of you may know, Syncfusion is a provider of software components for the
Microsoft platform. This puts us in the exciting but challenging position of always
being on the cutting edge.

Whenever platforms or tools are shipping out of Microsoft, which seems to be about every other
week these days, we have to educate ourselves, quickly.

Information is plentiful but harder to digest


In reality, this translates into a lot of book orders, blog searches, and Twitter scans.

While more information is becoming available on the Internet and more and more books are
being published, even on topics that are relatively new, one aspect that continues to inhibit us is
the inability to find concise technology overview books.

We are usually faced with two options: read several 500+ page books or scour the web for
relevant blog posts and other articles. Just as everyone else who has a job to do and customers
to serve, we find this quite frustrating.

The Succinctly series


This frustration translated into a deep desire to produce a series of concise technical books that
would be targeted at developers working on the Microsoft platform.

We firmly believe, given the background knowledge such developers have, that most topics can
be translated into books that are between 50 and 100 pages.

This is exactly what we resolved to accomplish with the Succinctly series. Isn’t everything
wonderful born out of a deep desire to change things for the better?

The best authors, the best content


Each author was carefully chosen from a pool of talented experts who shared our vision. The
book you now hold in your hands, and the others available in this series, are a result of the
authors’ tireless work. You will find original content that is guaranteed to get you up and running
in about the time it takes to drink a few cups of coffee.

Free forever
Syncfusion will be working to produce books on several topics. The books will always be free.
Any updates we publish will also be free.

Free? What is the catch?


There is no catch here. Syncfusion has a vested interest in this effort.

As a component vendor, our unique claim has always been that we offer deeper and broader
frameworks than anyone else on the market. Developer education greatly helps us market and
sell against competing vendors who promise to “enable AJAX support with one click,” or “turn
the moon to cheese!”

Let us know what you think


If you have any topics of interest, thoughts, or feedback, please feel free to send them to us at
[email protected].

We sincerely hope you enjoy reading this book and that it helps you better understand the topic
of study. Thank you for reading.

Please follow us on Twitter and “Like” us on Facebook to help us spread the


word about the Succinctly series!

About the Author

Alexandre Kowalczyk is a software developer at ABC Arbitrage, a financial company doing


automated trading on the stock market, and a certified Microsoft Specialist in C#.

Alexandre first encountered Support Vector Machines (SVMs) while attending the Andrew Ng
online course on Machine Learning three years ago. Since then, he has successfully used
SVMs on several projects, including real-time news classification.

In his spare time, he participates in Kaggle contests. He has used SVM implementations in C#,
R, and Python to classify plankton images, Greek news, and products into categories, and to
predict physical and chemical properties of soil using spectral measurements.

Alexandre has spent two years studying SVMs, allowing him to understand how they work.
Because it was difficult to find a simple overview of the subject, he started the blog SVM
Tutorial, where he explains SVMs as simply as he can.

He hopes this book will help you understand SVMs and provide you with another tool in your
machine-learning toolbox.

Acknowledgments
I would like to thank Syncfusion for providing me the opportunity to write this book, Grégory
Godin for taking the time to read and review it, and James McCaffrey for his in-depth technical
review.

Dedication
I dedicate this book to my mother, Claudine Kowalczyk (1954–2003).

Preface

Who is this book for?


This book’s aim is to provide a general overview of Support Vector Machines (SVMs). You will
learn what they are, which kinds of problems they can solve, and how to use them. I tried to
make this book useful for many categories of readers. Software engineers will find a lot of code
examples alongside simple explanations of the algorithms. A deeper understanding of how
SVMs work internally will enable you to make better use of the available implementations.

Students looking to take a first look at SVMs will find a large enough coverage of the subject to pique their curiosity. I also tried to include as many references as I could so that the interested reader can dive deeper.

How should you read this book?


Because each chapter is built on the previous one, reading this book sequentially is the
preferred method.

References
You will find a bibliography at the end of the book. A reference to a paper or book is made with
the name of the author followed by the publication date. For instance, (Bishop, 2006) refers to
the following line in the bibliography:

Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

Code listings
The code listings in this book have been created using the PyCharm IDE, Community Edition 2016.2.3, and executed with the WinPython 64-bit 3.5.1.2 distribution of Python and NumPy.
You can find the source code associated with this book in the accompanying Bitbucket repository.

Introduction

The Support Vector Machine is one of the best-performing off-the-shelf supervised machine-learning algorithms. This means that when you have a problem and you try to run an SVM on it, you will often get pretty good results without many tweaks. Despite this, because it is based on a strong mathematical background, it is often seen as a black box. In this book, we will go under the hood and look at the main ideas behind SVMs. There are several Support Vector Machines, which is why I will often refer to SVMs. The goal of this book is to understand how they work.

SVMs are the result of the work of several people over many years. The first SVM algorithm is
attributed to Vladimir Vapnik in 1963. He later worked closely with Alexey Chervonenkis on what
is known as the VC theory, which attempts to explain the learning process from a statistical
point of view, and they both contributed greatly to the SVM. You can find a very detailed history
of SVMs here.

In real life, SVMs have been successfully used in three main areas: text categorization, image
recognition, and bioinformatics (Cristianini & Shawe-Taylor, 2000). Specific examples include
classifying news stories, handwritten digit recognition, and cancer tissue samples.

In the first chapter, we will consider important concepts: vectors, linear separability, and
hyperplanes. They are the building blocks that will allow you to understand SVMs. In Chapter 2,
instead of jumping right into the subject, we will study a simple algorithm known as the
Perceptron. Do not skip it—even though it does not discuss SVMs, this chapter will give you
precious insight into why SVMs are better at classifying data.

Chapter 3 constructs, step by step, what is known as the SVM optimization problem. Chapter 4, which is probably the hardest, will show you how to solve this problem—first mathematically, then programmatically. In Chapter 5, we will discover a new support vector machine known as the soft-margin SVM; we will see how it is a crucial improvement to the original formulation.

Chapter 6 will introduce kernels and explain the so-called “kernel trick.” With this trick, we will get the kernelized SVM, which is the most widely used nowadays. In Chapter 7, we will learn about SMO, an algorithm specifically created to quickly solve the SVM optimization problem. In Chapter 8, we will see that SVMs can be used to classify more than two classes.

Every chapter contains code samples and figures so that you can understand the concepts
more easily. Of course, this book cannot cover every subject, and some of them will not be
presented. In the conclusion, you will find pointers toward what you can learn next about SVMs.

Let us now begin our journey.

Chapter 1 Prerequisites

This chapter introduces some basics you need to know in order to understand SVMs better. We
will first see what vectors are and look at some of their key properties. Then we will learn what it
means for data to be linearly separable before introducing a key component: the hyperplane.

Vectors
In Support Vector Machine, there is the word vector. It is important to know some basics about
vectors in order to understand SVMs and how to use them.

What is a vector?
A vector is a mathematical object that can be represented by an arrow (Figure 1).

Figure 1: Representation of a vector

When we do calculations, we denote a vector with the coordinates of its endpoint (the point where the tip of the arrow is). In Figure 1, the point A has the coordinates (4, 3). We can write:

$\vec{OA} = (4, 3)$

If we want to, we can give another name to the vector, for instance, $\vec{a}$.

From this point, one might be tempted to think that a vector is defined by its coordinates.
However, if I give you a sheet of paper with only a horizontal line and ask you to trace the same
vector as the one in Figure 1, you can still do it.

You need only two pieces of information:

• What is the length of the vector?


• What is the angle between the vector and the horizontal line?

This leads us to the following definition of a vector:

A vector is an object that has both a magnitude and a direction.

Let us take a closer look at each of these components.

The magnitude of a vector


The magnitude, or length, of a vector $x$ is written $\|x\|$ and is called its norm.

Figure 2: The magnitude of this vector is the length of the segment OA

In Figure 2, we can calculate the norm of the vector $\vec{OA}$ by using the Pythagorean theorem:

$\|\vec{OA}\| = \sqrt{4^2 + 3^2} = \sqrt{25} = 5$

In general, we compute the norm of a vector $x = (x_1, \ldots, x_n)$ by using the Euclidean norm formula:

$\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$

In Python, computing the norm can easily be done by calling the norm function provided by the
numpy module, as shown in Code Listing 1.

Code Listing 1

import numpy as np

x = [3,4]
np.linalg.norm(x) # 5.0

The direction of a vector


The direction is the second component of a vector. By definition, it is a new vector for which the
coordinates are the initial coordinates of our vector divided by its norm.

The direction of a vector $u = (u_1, u_2)$ is the vector:

$w = \left( \frac{u_1}{\|u\|}, \frac{u_2}{\|u\|} \right)$
It can be computed in Python using the code in Code Listing 2.

Code Listing 2

import numpy as np

# Compute the direction of a vector x.
def direction(x):
    return x / np.linalg.norm(x)

Where does it come from? Geometry. Figure 3 shows us a vector $u$ and its angles with respect to the horizontal and vertical axes. There is an angle $\theta$ (theta) between $u$ and the horizontal axis, and there is an angle $\alpha$ (alpha) between $u$ and the vertical axis.

Figure 3: A vector u and its angles with respect to the axis

Using elementary geometry, we see that $\cos(\theta) = \frac{u_1}{\|u\|}$ and $\cos(\alpha) = \frac{u_2}{\|u\|}$, which means that $w$ can also be defined by:

$w = (\cos(\theta), \cos(\alpha))$

The coordinates of $w$ are defined by cosines. As a result, if the angle between $u$ and an axis changes, which means the direction of $u$ changes, $w$ will also change. That is why we call this vector the direction of vector $u$. We can compute the value of $w$ (Code Listing 3), and we find that its coordinates are $(0.6, 0.8)$.

Code Listing 3

u = np.array([3,4])
w = direction(u)

print(w) # [0.6 , 0.8]

It is interesting to note that if two vectors have the same direction, they will have the same direction vector (Code Listing 4).

Code Listing 4

u_1 = np.array([3,4])
u_2 = np.array([30,40])

print(direction(u_1)) # [0.6 , 0.8]


print(direction(u_2)) # [0.6 , 0.8]

Moreover, the norm of a direction vector is always 1. We can verify that with the vector $w = (0.6, 0.8)$ (Code Listing 5).

Code Listing 5

np.linalg.norm(np.array([0.6, 0.8])) # 1.0

It makes sense, as the sole objective of this vector is to describe the direction of other vectors—by having a norm of 1, it stays as simple as possible. As a result, a direction vector such as $w$ is often referred to as a unit vector.

Dimensions of a vector
Note that the order in which the numbers are written is important. As a result, we say that an $n$-dimensional vector is a tuple of $n$ real-valued numbers.

For instance, $u = (u_1, u_2)$ is a two-dimensional vector; we often write $u \in \mathbb{R}^2$ (“$u$ belongs to $\mathbb{R}^2$”).

Similarly, the vector $v = (v_1, v_2, v_3)$ is a three-dimensional vector, and $v \in \mathbb{R}^3$.

The dot product


The dot product is an operation performed on two vectors that returns a number. A number is
sometimes called a scalar; that is why the dot product is also called a scalar product.

People often have trouble with the dot product because it seems to come out of nowhere. What
is important is that it is an operation performed on two vectors and that its result gives us some
insights into how the two vectors relate to each other. There are two ways to think about the dot
product: geometrically and algebraically.

Geometric definition of the dot product


Geometrically, the dot product is the product of the Euclidean magnitudes of the two vectors
and the cosine of the angle between them.

Figure 4: Two vectors x and y

This means that if we have two vectors, $x$ and $y$, with an angle $\theta$ between them (Figure 4), their dot product is:

$x \cdot y = \|x\| \|y\| \cos(\theta)$

By looking at this formula, we can see that the dot product is strongly influenced by the angle $\theta$:

• When $\theta = 0°$, we have $\cos(\theta) = 1$ and $x \cdot y = \|x\| \|y\|$

• When $\theta = 90°$, we have $\cos(\theta) = 0$ and $x \cdot y = 0$

• When $\theta = 180°$, we have $\cos(\theta) = -1$ and $x \cdot y = -\|x\| \|y\|$

Keep this in mind—it will be useful later when we study the Perceptron learning algorithm.

We can write a simple Python function to compute the dot product using this definition (Code
Listing 6) and use it to get the value of the dot product in Figure 4 (Code Listing 7).

Code Listing 6

import math
import numpy as np

def geometric_dot_product(x, y, theta):
    x_norm = np.linalg.norm(x)
    y_norm = np.linalg.norm(y)
    return x_norm * y_norm * math.cos(math.radians(theta))

However, we need to know the value of $\theta$ to be able to compute the dot product.

Code Listing 7

theta = 45
x = [3,5]
y = [8,2]

print(geometric_dot_product(x,y,theta)) # 34.0

Algebraic definition of the dot product

Figure 5: Using these three angles will allow us to simplify the dot product

In Figure 5, we can see the relationship between the three angles $\theta$, $\beta$ (beta), and $\alpha$ (alpha):

$\theta = \beta - \alpha$

This means computing $\cos(\theta)$ is the same as computing $\cos(\beta - \alpha)$.

Using the difference identity for cosine, we get:

$\cos(\beta - \alpha) = \cos(\beta)\cos(\alpha) + \sin(\beta)\sin(\alpha)$

If we multiply both sides by $\|x\| \|y\|$, we get:

$\|x\| \|y\| \cos(\beta - \alpha) = \|x\| \|y\| \left( \cos(\beta)\cos(\alpha) + \sin(\beta)\sin(\alpha) \right)$

We already know that:

$\cos(\beta) = \frac{x_1}{\|x\|}, \quad \sin(\beta) = \frac{x_2}{\|x\|}, \quad \cos(\alpha) = \frac{y_1}{\|y\|}, \quad \sin(\alpha) = \frac{y_2}{\|y\|}$

This means the dot product can also be written:

$x \cdot y = \|x\| \|y\| \left( \frac{x_1}{\|x\|} \frac{y_1}{\|y\|} + \frac{x_2}{\|x\|} \frac{y_2}{\|y\|} \right)$

Or:

$x \cdot y = x_1 y_1 + x_2 y_2$

In a more general way, for $n$-dimensional vectors, we can write:

$x \cdot y = \sum_{i=1}^{n} x_i y_i$

This formula is the algebraic definition of the dot product.

Code Listing 8

def dot_product(x, y):
    result = 0
    for i in range(len(x)):
        result = result + x[i] * y[i]
    return result

This definition is advantageous because we do not have to know the angle to compute the dot
product. We can write a function to compute its value (Code Listing 8) and get the same result
as with the geometric definition (Code Listing 9).

Code Listing 9

x = [3,5]
y = [8,2]
print(dot_product(x,y)) # 34

Of course, we can also use the dot function provided by numpy (Code Listing 10).

Code Listing 10

import numpy as np

x = np.array([3,5])
y = np.array([8,2])

print(np.dot(x,y)) # 34

We spent quite some time understanding what the dot product is and how it is computed. This is
because the dot product is a fundamental notion that you should be comfortable with in order to
figure out what is going on in SVMs. We will now see another crucial aspect, linear separability.

Understanding linear separability


In this section, we will use a simple example to introduce linear separability.

Linearly separable data


Imagine you are a wine producer. You sell wine coming from two different production batches:

• One high-end wine costing $145 a bottle.


• One common wine costing $8 a bottle.

Recently, you started to receive complaints from clients who bought an expensive bottle. They
claim that their bottle contains the cheap wine. This results in a major reputation loss for your
company, and customers stop ordering your wine.

Using alcohol-by-volume to classify wine


You decide to find a way to distinguish the two wines. You know that one of them contains more
alcohol than the other, so you open a few bottles, measure the alcohol concentration, and plot it.

Figure 6: An example of linearly separable data

In Figure 6, you can clearly see that the expensive wine contains less alcohol than the cheap
one. In fact, you can find a point that separates the data into two groups. This data is said to be
linearly separable. For now, you decide to measure the alcohol concentration of your wine
automatically before filling an expensive bottle. If it is greater than 13 percent, the production
chain stops and one of your employees must make an inspection. This improvement dramatically
reduces complaints, and your business is flourishing again.

This example is too easy—in reality, data seldom works like that. In fact, some scientists really measured the alcohol concentration of wine, and the plot they obtained is shown in Figure 7. This is an example of non-linearly separable data. Even though most of the time data will not be linearly separable, it is fundamental that you understand linear separability well. In most cases, we will start from the linearly separable case (because it is the simpler one) and then derive the non-separable case.

Similarly, in most problems, we will not work with only one dimension, as in Figure 6. Real-life
problems are more challenging than toy examples, and some of them can have thousands of
dimensions, which makes working with them more abstract. However, this abstractness does not make the problem more complex. Most examples in this book will be two-dimensional examples. They are
simple enough to be easily visualized, and we can do some basic geometry on them, which will
allow you to understand the fundamentals of SVMs.

Figure 7: Plotting alcohol by volume from a real dataset

In our example of Figure 6, there is only one dimension: that is, each data point is represented
by a single number. When there are more dimensions, we will use vectors to represent each
data point. Every time we add a dimension, the object we use to separate the data changes.
Indeed, while we can separate the data with a single point in Figure 6, as soon as we go into
two dimensions we need a line (a set of points), and in three dimensions we need a plane
(which is also a set of points).

To summarize, data is linearly separable when:

• In one dimension, you can find a point separating the data (Figure 6).
• In two dimensions, you can find a line separating the data (Figure 8).
• In three dimensions, you can find a plane separating the data (Figure 9).

Figure 8: Data separated by a line Figure 9: Data separated by a plane

Similarly, when data is non-linearly separable, we cannot find a separating point, line, or plane.
Figure 10 and Figure 11 show examples of non-linearly separable data in two and three
dimensions.

Figure 10: Non-linearly separable data in 2D Figure 11: Non-linearly separable data in 3D

Hyperplanes
What do we use to separate the data when there are more than three dimensions? We use
what is called a hyperplane.

What is a hyperplane?
In geometry, a hyperplane is a subspace of one dimension less than its ambient space.

This definition, albeit true, is not very intuitive. Instead of using it, we will try to understand what
a hyperplane is by first studying what a line is.

If you recall mathematics from school, you probably learned that a line has an equation of the form $y = ax + b$, that the constant $a$ is known as the slope, and that $b$ is where the line intercepts the y-axis. There are several values of $(x, y)$ for which this formula is true, and we say that the set of the solutions is a line.

What is often confusing is that if you study the function $f(x) = ax + b$ in a calculus course, you will be studying a function with one variable.

However, it is important to note that the linear equation $y = ax + b$ has two variables, respectively $x$ and $y$, and we can name them as we want.

For instance, we can rename $x$ as $x_1$ and $y$ as $x_2$, and the equation becomes: $ax_1 + b = x_2$.

This is equivalent to $ax_1 - x_2 + b = 0$.

If we define the two-dimensional vectors $x = (x_1, x_2)$ and $w = (a, -1)$, we obtain another notation for the equation of a line (where $w \cdot x$ is the dot product of $w$ and $x$):

$w \cdot x + b = 0$
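As a quick check that both notations describe the same set of points, here is a minimal sketch (the line $y = 2x + 3$ is an arbitrary example of mine, not one of the book's listings):

import numpy as np

# The line y = 2x + 3 rewritten as w.x + b = 0,
# with w = (a, -1) = (2, -1) and b = 3.
w = np.array([2, -1])
b = 3

print(np.dot(w, np.array([1, 5])) + b)  # 0  -> (1, 5) is on the line
print(np.dot(w, np.array([1, 6])) + b)  # -1 -> (1, 6) is not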

What is nice with this last equation is that it uses vectors. Even if we derived it by using two-dimensional vectors, it works for vectors of any dimension. It is, in fact, the equation of a hyperplane.

From this equation, we can have another insight into what a hyperplane is: it is the set of points satisfying $w \cdot x + b = 0$. And, if we keep just the essence of this definition: a hyperplane is a set of points.

If we have been able to deduce the hyperplane equation from the equation of a line, it is because a line is a hyperplane. You can convince yourself by reading the definition of a hyperplane again: a line is a one-dimensional subspace inside a two-dimensional ambient space, so it has one dimension less than its ambient space. Similarly, points and planes are hyperplanes, too.

Understanding the hyperplane equation


We derived the equation of a hyperplane from the equation of a line. Doing the opposite is
interesting, as it shows us more clearly the relationship between the two.

Given vectors $w = (w_0, w_1)$ and $x = (x, y)$, and a bias $b$, we can define a hyperplane having the equation:

$w \cdot x + b = 0$

This is equivalent to:

$w_0 x + w_1 y + b = 0$

We isolate $y$ to get:

$y = -\frac{w_0}{w_1} x - \frac{b}{w_1}$

If we define $a = -\frac{w_0}{w_1}$ and $c = -\frac{b}{w_1}$:

$y = ax + c$

We see that the bias $c$ of the line equation is equal to the bias $b$ of the hyperplane equation only when $w_1 = -1$. So you should not be surprised if $b$ is not the intersection with the vertical axis when you see a plot of a hyperplane (this will be the case in our next example). Moreover, if $w_0$ and $w_1$ have the same sign, the slope $a$ will be negative.
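As a quick sanity check, the following sketch (a helper of my own with illustrative values, not one of the book's listings) applies this conversion to recover the slope and intercept from $w$ and $b$:

import numpy as np

# Convert a two-dimensional hyperplane defined by w and b into the
# slope a and intercept c of the equivalent line (assumes w[1] != 0).
def hyperplane_to_line(w, b):
    a = -w[0] / w[1]
    c = -b / w[1]
    return a, c

print(hyperplane_to_line(np.array([2.0, -1.0]), 3.0))  # (2.0, 3.0), i.e. y = 2x + 3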

Classifying data with a hyperplane

Figure 12: A linearly separable dataset

Given the linearly separable data of Figure 12, we can use a hyperplane to perform binary
classification.

For instance, with a suitable vector $w$ and bias $b$, we get the hyperplane shown in Figure 13.

Figure 13: A hyperplane separates the data

We associate each vector $x_i$ with a label $y_i$, which can have the value $+1$ or $-1$ (respectively the triangles and the stars in Figure 13).

We define a hypothesis function $h$:

$h(x_i) = \begin{cases} +1 & \text{if } w \cdot x_i + b \geq 0 \\ -1 & \text{if } w \cdot x_i + b < 0 \end{cases}$

which is equivalent to:

$h(x_i) = \operatorname{sign}(w \cdot x_i + b)$
It uses the position of $x_i$ with respect to the hyperplane to predict a value for the label $y_i$. Every data point on one side of the hyperplane will be assigned one label, and every data point on the other side will be assigned the other label.

For instance, take a point $x$ above the hyperplane. When we do the calculation, we get a positive value for $w \cdot x + b$, so $h(x) = +1$.

Similarly, for a point $x$ below the hyperplane, $h(x)$ will return $-1$ because $w \cdot x + b$ is negative.

Because it uses the equation of the hyperplane, which produces a linear combination of the input values, the function $h$ is called a linear classifier.
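As a small illustration, here is this hypothesis function in Python (the vector, bias, and points are made up for the example; Code Listing 12 in the next chapter uses the augmented form instead):

import numpy as np

# Predict +1 or -1 depending on which side of the hyperplane x lies.
def hypothesis(x, w, b):
    return np.sign(np.dot(w, x) + b)

# Illustrative values only.
w = np.array([0.5, 1.0])
b = -6.0

print(hypothesis(np.array([8, 7]), w, b))  # 1.0  (above the hyperplane)
print(hypothesis(np.array([2, 1]), w, b))  # -1.0 (below the hyperplane)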

With one more trick, we can make the formula of $h$ even simpler by removing the constant $b$. First, we add a component $b$ to the vector $w$. We get the vector $\hat{w} = (b, w_1, w_2)$ (it reads “$w$ hat” because we put a hat on $w$). Similarly, we add a component $1$ to the vector $x$, which becomes $\hat{x} = (1, x_1, x_2)$.

Note: In the rest of the book, we will call a vector to which we add an artificial
coordinate an augmented vector.

When we use augmented vectors, the hypothesis function becomes:

$h(\hat{x}_i) = \operatorname{sign}(\hat{w} \cdot \hat{x}_i)$

If we have a hyperplane that separates the data set like the one in Figure 13, by using the hypothesis function $h$, we are able to predict the label of every point perfectly.
The main question is: how do we find such a hyperplane?

How can we find a hyperplane (separating the data or not)?


Recall that the equation of the hyperplane in augmented form is $\hat{w} \cdot \hat{x} = 0$. It is important to understand that the only value that impacts the shape of the hyperplane is $\hat{w}$. To convince yourself, we can come back to the two-dimensional case, when a hyperplane is just a line. When we create the augmented three-dimensional vectors, we obtain $\hat{x} = (1, x, y)$ and $\hat{w} = (b, a, -1)$. You can see that the vector $\hat{w}$ contains both $a$ and $b$, which are the two main components defining the look of the line. Changing the value of $\hat{w}$ gives us different hyperplanes (lines), as shown in Figure 14.

Figure 14: Different values of w will give you different hyperplanes

Summary
After introducing vectors and linear separability, we learned what a hyperplane is and how we
can use it to classify data. We then saw that the goal of a learning algorithm trying to learn a
linear classifier is to find a hyperplane separating the data. Eventually, we discovered that
finding a hyperplane is equivalent to finding a vector $w$.

We will now examine which approaches learning algorithms use to find a hyperplane that
separates the data. Before looking at how SVMs do this, we will first look at one of the simplest
learning models: the Perceptron.

Chapter 2 The Perceptron

Presentation
The Perceptron is an algorithm invented in 1957 by Frank Rosenblatt, a few years before the
first SVM. It is widely known because it is the building block of a simple neural network: the
multilayer perceptron. The goal of the Perceptron is to find a hyperplane that can separate a
linearly separable data set. Once the hyperplane is found, it is used to perform binary
classification.

Given augmented vectors $\hat{x}$ and $\hat{w}$, the Perceptron uses the same hypothesis function we saw in the previous chapter to classify a data point $x_i$:

$h(x_i) = \operatorname{sign}(\hat{w} \cdot \hat{x}_i)$

The Perceptron learning algorithm


Given a training set of $m$ $n$-dimensional training examples $(x_i, y_i)$, the Perceptron Learning Algorithm (PLA) tries to find a hypothesis function $h$ that predicts the label $y_i$ of every $x_i$ correctly.

The hypothesis function of the Perceptron is $h(x) = \operatorname{sign}(w \cdot x)$, and we saw that $w \cdot x = 0$ is just the equation of a hyperplane. We can then say that the set $\mathcal{H}$ of hypothesis functions is the set of $(n-1)$-dimensional hyperplanes ($n-1$ because a hyperplane has one dimension less than its ambient space).

What is important to understand here is that the only unknown value is $w$. It means that the goal of the algorithm is to find a value for $w$. You find $w$; you have a hyperplane. There is an infinite number of hyperplanes (you can give any value to $w$), so there is an infinity of hypothesis functions.

This can be written more formally this way:

Given a training set $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{m}$ and a set $\mathcal{H}$ of hypothesis functions:

Find $h \in \mathcal{H}$ such that $h(x_i) = y_i$ for every $x_i$.

This is equivalent to:

Given a training set $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{m}$ and a set $\mathcal{H}$ of hypothesis functions:

Find $w$ such that $\operatorname{sign}(w \cdot x_i) = y_i$ for every $x_i$.

The PLA is a very simple algorithm, and can be summarized this way:

1. Start with a random hyperplane (defined by a vector $w$) and use it to classify the data.
2. Pick a misclassified example and select another hyperplane by updating the value of $w$, hoping it will work better at classifying this example (this is called the update rule).
3. Classify the data with this new hyperplane.
4. Repeat steps 2 and 3 until there is no misclassified example.

Once the process is over, you have a hyperplane that separates the data.
The algorithm is shown in Code Listing 11.

Code Listing 11

import numpy as np

def perceptron_learning_algorithm(X, y):
    w = np.random.rand(3)  # can also be initialized at zero.
    misclassified_examples = predict(hypothesis, X, y, w)

    while misclassified_examples.any():
        x, expected_y = pick_one_from(misclassified_examples, X, y)
        w = w + x * expected_y  # update rule
        misclassified_examples = predict(hypothesis, X, y, w)

    return w

Let us look at the code in detail.

The perceptron_learning_algorithm function uses several other functions (Code Listing 12). The hypothesis function is just $h(x) = \operatorname{sign}(w \cdot x)$ written in Python code; as we saw before, it is the function that returns the label predicted for an example $x$ when classifying with the hyperplane defined by $w$. The predict function applies the hypothesis to every example and returns the ones that are misclassified.

Code Listing 12

def hypothesis(x, w):
    return np.sign(np.dot(w, x))

# Make predictions on all data points
# and return the ones that are misclassified.
def predict(hypothesis_function, X, y, w):
    predictions = np.apply_along_axis(hypothesis_function, 1, X, w)
    misclassified = X[y != predictions]
    return misclassified

Once we have made predictions with predict, we know which examples are misclassified, so
we use the function pick_one_from to select one of them randomly (Code Listing 13).

Code Listing 13

# Pick one misclassified example randomly
# and return it with its expected label.
def pick_one_from(misclassified_examples, X, y):
    np.random.shuffle(misclassified_examples)
    x = misclassified_examples[0]
    index = np.where(np.all(X == x, axis=1))
    return x, y[index]

We then arrive at the heart of the algorithm: the update rule. For now, just remember that it changes the value of $w$. Why it does this will be explained in detail later. We once again use the predict function, but this time, we give it the updated $w$. It allows us to see if we have classified all data points correctly, or if we need to repeat the process until we do.

Code Listing 14 demonstrates how we can use the perceptron_learning_algorithm function with a toy data set. Note that we need the $w$ and $x$ vectors to have the same dimension, so we convert every vector $x$ into an augmented vector before giving it to the function.

Code Listing 14

# See Appendix A for more information about the dataset
from succinctly.datasets import get_dataset, linearly_separable as ls

np.random.seed(88)

X, y = get_dataset(ls.get_training_examples)

# transform X into an array of augmented vectors.
X_augmented = np.c_[np.ones(X.shape[0]), X]

w = perceptron_learning_algorithm(X_augmented, y)

print(w)  # [-44.35244895 1.50714969 5.52834138]

Understanding the update rule


Why do we use this particular update rule? Recall that we picked a misclassified example at random. Now we would like to make the Perceptron classify this example correctly. To do so, we decide to update the vector $w$. The idea here is simple. Since the sign of the dot product between $w$ and $x$ is incorrect, by changing the angle between them, we can make it correct:

• If the predicted label is 1, the angle between $w$ and $x$ is smaller than $90°$, and we want to increase it.
• If the predicted label is -1, the angle between $w$ and $x$ is bigger than $90°$, and we want to decrease it.

Figure 15: Two vectors
Let’s see what happens with two vectors, $w$ and $x$, having an angle $\theta$ between them (Figure 15). On the one hand, adding them creates a new vector $w + x$, and the angle $\beta$ between $x$ and $w + x$ is smaller than $\theta$ (Figure 16).

Figure 16: The addition creates a smaller angle

On the other hand, subtracting them creates a new vector $w - x$, and the angle $\beta$ between $x$ and $w - x$ is bigger than $\theta$ (Figure 17).

Figure 17: The subtraction creates a bigger angle
We can use these two observations to adjust the angle:

• If the predicted label is 1, the angle is smaller than $90°$. We want to increase the angle, so we set $w = w - x$.
• If the predicted label is -1, the angle is bigger than $90°$. We want to decrease the angle, so we set $w = w + x$.

As we are doing this only on misclassified examples, when the predicted label has a value, the expected label is the opposite. This means we can rewrite the previous statement:

• If the expected label is -1: we want to increase the angle, so we set $w = w - x$.
• If the expected label is +1: we want to decrease the angle, so we set $w = w + x$.

When translated into Python, this gives us Code Listing 15, and we can see that it is strictly equivalent to Code Listing 16, which is the update rule.
Code Listing 15

def update_rule(expected_y, w, x):
    if expected_y == 1:
        w = w + x
    else:
        w = w - x
    return w

Code Listing 16

def update_rule(expected_y, w, x):
    w = w + x * expected_y
    return w

We can verify that the update rule works as we expect by checking the value of the hypothesis
before and after applying it (Code Listing 17).

Code Listing 17

import numpy as np

def hypothesis(x, w):
    return np.sign(np.dot(w, x))

x = np.array([1, 2, 7])
expected_y = -1
w = np.array([4, 5, 3])

print(hypothesis(x, w))  # The predicted y is 1.

w = update_rule(expected_y, w, x)  # we apply the update rule.

print(hypothesis(x, w))  # The predicted y is -1.

Note that the update rule does not necessarily change the sign of the hypothesis for the example the first time. Sometimes it is necessary to apply the update rule several times before that happens, as shown in Code Listing 18. This is not a problem, as we are looping across misclassified examples, so we will continue to use the update rule until the example is correctly classified. What matters here is that each time we use the update rule, we change the value of the angle in the right direction (increasing it or decreasing it).

Code Listing 18

import numpy as np

x = np.array([1, 3])
expected_y = -1
w = np.array([5, 3])

print(hypothesis(x, w))  # The predicted y is 1.

w = update_rule(expected_y, w, x)  # we apply the update rule.

print(hypothesis(x, w))  # The predicted y is 1.

w = update_rule(expected_y, w, x)  # we apply the update rule once again.

print(hypothesis(x, w))  # The predicted y is -1.

Also note that sometimes updating the value of $w$ for a particular example changes the hyperplane in such a way that another example previously correctly classified becomes misclassified. So, the hypothesis might become worse at classifying after being updated. This is illustrated in Figure 18, which shows us the number of correctly classified examples at each iteration step. One way to avoid this problem is to keep a record of the value of $w$ before making the update and to use the updated $w$ only if it reduces the number of misclassified examples. This modification of the PLA is known as the Pocket algorithm (because we keep $w$ in our pocket); a minimal sketch of it follows Figure 18.

Figure 18: The PLA update rule oscillates
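The book does not provide code for the Pocket algorithm, so here is a minimal sketch of the idea (my own, reusing the hypothesis, predict, and pick_one_from functions from the previous listings):

import numpy as np

# A minimal sketch of the Pocket algorithm: keep the best w seen
# so far "in our pocket" and replace it only when an update
# reduces the number of misclassified examples.
def pocket_algorithm(X, y, max_iterations=1000):
    w = np.random.rand(3)
    best_w = w
    best_error_count = len(predict(hypothesis, X, y, w))

    for _ in range(max_iterations):
        misclassified_examples = predict(hypothesis, X, y, w)
        if not misclassified_examples.any():
            break
        x, expected_y = pick_one_from(misclassified_examples, X, y)
        w = w + x * expected_y  # PLA update rule
        error_count = len(predict(hypothesis, X, y, w))
        if error_count < best_error_count:
            best_error_count = error_count
            best_w = w

    return best_w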

Convergence of the algorithm


We said that we keep updating the vector $w$ with the update rule until there is no misclassified point. But how can we be so sure that this will ever happen? Luckily for us, mathematicians have studied this problem, and we can be very sure because the Perceptron convergence theorem guarantees that if the two sets P and N (of positive and negative examples, respectively) are linearly separable, the vector $w$ is updated only a finite number of times. This was first proved by Novikoff in 1963 (Rojas, 1996).

Understanding the limitations of the PLA


One thing to understand about the PLA algorithm is that because weights are randomly
initialized and misclassified examples are randomly chosen, it is possible the algorithm will
return a different hyperplane each time we run it. Figure 19 shows the result of running the PLA
on the same dataset four times. As you can see, the PLA finds four different hyperplanes.

Figure 19: The PLA finds a different hyperplane each time

At first, this might not seem like a problem. After all, the four hyperplanes perfectly classify the
data, so they might be equally good, right? However, when using a machine learning algorithm
such as the PLA, our goal is not to find a way to perfectly classify the data we have right now.
Our goal is to find a way to correctly classify new data we will receive in the future.

Let us introduce some terminology to be clear about this. To train a model, we pick a sample of
existing data and call it the training set. We train the model, and it comes up with a hypothesis
(a hyperplane in our case). We can measure how well the hypothesis performs on the training
set: we call this the in-sample error (also called training error). Once we are satisfied with the
hypothesis, we decide to use it on unseen data (the test set) to see if it indeed learned
something. We measure how well the hypothesis performs on the test set, and we call this the
out-of-sample error (also called the generalization error).

Our goal is to have the smallest out-of-sample error.
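Both errors are straightforward to estimate in code. Here is a minimal sketch (a helper of my own, assuming the hypothesis function and augmented vectors from the previous listings):

import numpy as np

# Fraction of examples misclassified by the hyperplane defined by w.
def error_rate(X, y, w):
    predictions = np.apply_along_axis(hypothesis, 1, X, w)
    return np.mean(predictions != y)

# in_sample_error = error_rate(X_train_augmented, y_train, w)
# out_of_sample_error = error_rate(X_test_augmented, y_test, w)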

In the case of the PLA, all hypotheses in Figure 19 perfectly classify the data: their in-sample
error is zero. But we are really concerned about their out-of-sample error. We can use a test set
such as the one in Figure 20 to check their out-of-sample errors.

Figure 20: A test dataset

As you can see in Figure 21, the two hypotheses on the right, despite perfectly classifying the
training dataset, are making errors with the test dataset.

Now we better understand why it is problematic. When using the Perceptron with a linearly
separable dataset, we have the guarantee of finding a hypothesis with zero in-sample error, but
we have no guarantee about how well it will generalize to unseen data (if an algorithm
generalizes well, its out-of-sample error will be close to its in-sample error). How can we choose
a hyperplane that generalizes well? As we will see in the next chapter, this is one of the goals of
SVMs.

Figure 21: Not all hypotheses have perfect out-of-sample error

Summary
In this chapter, we have learned what a Perceptron is. We then saw in detail how the
Perceptron Learning Algorithm works and what the motivation behind the update rule is. After
learning that the PLA is guaranteed to converge, we saw that not all hypotheses are equal, and
that some of them will generalize better than others. Eventually, we saw that the Perceptron is
unable to select which hypothesis will have the smallest out-of-sample error and instead just
picks one hypothesis having the lowest in-sample error at random.

Chapter 3 The SVM Optimization Problem

SVMs search for the optimal hyperplane


The Perceptron has several advantages: it is a simple model, the algorithm is very easy to
implement, and we have a theoretical proof that it will find a hyperplane that separates the data.
However, its biggest weakness is that it will not find the same hyperplane every time. Why do
we care? Because not all separating hyperplanes are equal. If the Perceptron gives you a
hyperplane that is very close to all the data points from one class, you have a right to believe
that it will generalize poorly when given new data.

SVMs do not have this problem. Indeed, instead of looking for a hyperplane, SVMs try to find the hyperplane. We will call this the optimal hyperplane, and we will say that it is the one that best separates the data.

How can we compare two hyperplanes?


Because we cannot choose the optimal hyperplane based on our feelings, we need some sort
of metric that will allow us to compare two hyperplanes and say which one is superior to all
others.

In this section, we will try to discover how we can compare two hyperplanes. In other words, we
will search for a way to compute a number that allows us to tell which hyperplane separates the
data the best. We will look at methods that seem to work, but then we will see why they do not
work and how we can correct their limitations. Let us try with a simple attempt to compare two
hyperplanes using only the equation of the hyperplane.

Using the equation of the hyperplane


Given an example and a hyperplane, we wish to know how the example relates to the
hyperplane.

One key element we already know is that if a point $x$ satisfies the equation of a line, then the point is on the line. It works in the same way for a hyperplane: for a data point $x$ and a hyperplane defined by a vector $w$ and bias $b$, we will get $w \cdot x + b = 0$ if $x$ is on the hyperplane.

But what if the point is not on the hyperplane?

Let us see what happens with an example. In Figure 22, the line is defined by a vector $w$ and a bias $b$. When we use the equation of the hyperplane and compute $w \cdot x + b$ for the points A, B, and C of the figure, we get a different number for each of them, and the number obtained for point A, which is far from the line, is bigger than the one obtained for point B, which is closer to it.

Figure 22: The equation returns a bigger number for A than for B

As you can see, when the point is not on the hyperplane we get a number different from zero. In
fact, if we use a point far away from the hyperplane, we will get a bigger number than if we use
a point closer to the hyperplane.

Another thing to notice is that the sign of the number returned by the equation tells us where the point stands with respect to the line. Using the equation of the line displayed in Figure 23, we get a positive number for points A and B, and a negative number for point C.

Figure 23: The equation returns a negative number for C

If the equation returns a positive number, the point is below the line, while if it is a negative
number, it is above. Note that it is not necessarily visually above or below, because if you have
a line like the one in Figure 24, it will be left or right, but the same logic applies. The sign of the
number returned by the equation of the hyperplane allows us to tell if two points lie on the same
side. In fact, this is exactly what the hypothesis function we defined in Chapter 2 does.

Figure 24: A line can separate the space in different ways

We now have the beginning of a solution for comparing two hyperplanes.

Given a training example $(x, y)$ and a hyperplane defined by a vector $w$ and bias $b$, we compute the number $\beta = w \cdot x + b$ to know how far the point is from the hyperplane.

Given a data set $\mathcal{D}$, we compute $\beta_i$ for each training example, and say that the number $B$ is the smallest $\beta_i$ we encounter:

$B = \min_{i} \beta_i$

If we need to choose between two hyperplanes, we will then select the one for which $B$ is the largest.

To be clear, this means that if we have $k$ hyperplanes, we will compute $B_1, B_2, \ldots, B_k$ and select the hyperplane having the largest $B$.
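Translated into code, this first attempt looks like the following sketch (a helper of my own, not one of the book's listings; Code Listing 19 later gives the corrected version):

import numpy as np

# First attempt: score a hyperplane by the smallest value of
# w.x + b over the data set. As the next section shows, this
# breaks down for examples on the negative side.
def naive_margin(w, b, X):
    return np.min([np.dot(w, x) + b for x in X])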

Problem with examples on the negative side


Unfortunately, using the result of the hyperplane equation has its limitations. The problem is that
taking the minimum value does not work for examples on the negative side (the ones for which
the equation returns a negative value).

Remember that we always wish to take the f of the point being the closest to the hyperplane.
Computing F with examples on the positive side actually does this: between two points with,
for instance, f = 5 and f = 2, we pick the one having the smallest number, so we choose 2. However,
between two examples having f = -5 and f = -1, this rule will pick -5 because -5 is smaller
than -1, but the closest point is actually the one with f = -1.

One way to fix this problem is to consider the absolute value of f.

Given a data set D, we compute f for each example and say that F is the f having the smallest
absolute value:

F = min_{i=1…m} |w · x_i + b|
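
Before naming this quantity, here is a minimal sketch of the selection rule we just described (the
function name is mine, not one of the book's listings):

import numpy as np

# Smallest absolute value of w . x + b over a dataset:
# the naive way of scoring a hyperplane described above.
def naive_margin(w, b, X):
    return np.min([abs(np.dot(w, x) + b) for x in X])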

Does the hyperplane correctly classify the data?
Computing the number F allows us to select a hyperplane. However, using only this value, we
might pick the wrong one. Consider the case in Figure 25: the examples are correctly
classified, and the value of F computed using the last formula is 2.

Figure 25: A hyperplane correctly classifying the data

In Figure 26, the examples are incorrectly classified, and the value of F is also 2. This is
problematic because using F, we do not know which hyperplane is better. In theory, they look
equally good, but in reality, we want to pick the one from Figure 25.

Figure 26: A hyperplane that does not classify the data correctly

How can we adjust our formula to meet this requirement?

Well, there is one component of our training example that we did not use: the label y!

If we multiply f by the value of y, we change its sign. Let us call this new number f:

f = y(w · x + b)

The sign of f will always be:

• Positive if the point is correctly classified

• Negative if the point is incorrectly classified

Given a data set D, we can compute:

F = min_{i=1…m} y_i(w · x_i + b)

With this formula, when comparing two hyperplanes, we will still select the one for which F is the
largest. The added bonus is that in special cases like the ones in Figure 25 and Figure 26, we
will always pick the hyperplane that classifies correctly (because F will have a positive value,
while its value will be negative for the other hyperplane).

In the literature, the number f has a name: it is called the functional margin of an example; its
value can be computed in Python, as shown in Code Listing 19. Similarly, the number F is
known as the functional margin of the data set D.

Code Listing 19

# Compute the functional margin of an example (x,y)
# with respect to a hyperplane defined by w and b.
def example_functional_margin(w, b, x, y):
    result = y * (np.dot(w, x) + b)
    return result

# Compute the functional margin of a hyperplane
# for examples X with labels y.
def functional_margin(w, b, X, y):
    return np.min([example_functional_margin(w, b, x, y[i])
                   for i, x in enumerate(X)])

Using this formula, we find that the functional margin of the hyperplane in Figure 25 is +2, while
in Figure 26 it is -2. Because it has a bigger margin, we will select the first one.

Tip: Remember, we wish to choose the hyperplane with the largest margin.

Scale invariance
It looks like we found a good way to compare the two hyperplanes this time. However, there is a
major problem with the functional margin: it is not scale invariant.

Given a vector w_1 = (2, 1) and bias b_1 = 5, if we multiply them by 10, we get w_2 = (20, 10) and
b_2 = 50. We say we rescaled them.

The vectors w_1 and w_2 represent the same hyperplane because they have the same unit
vector. The hyperplane being a plane orthogonal to a vector w, it does not matter how long the
vector is. The only thing that matters is its direction, which, as we saw in the first chapter, is
given by its unit vector. Moreover, when tracing the hyperplane on a graph, the coordinate of the
intersection between the vertical axis and the hyperplane is -b divided by the second component
of w, a ratio that the rescaling leaves unchanged, so the hyperplane does not change because
of the rescaling of b, either.

The problem, as we can see in Code Listing 20, is that when we compute the functional margin
with w_2, we get a number ten times bigger than with w_1. This means that given any hyperplane,
we can always find one that will have a larger functional margin, just by rescaling w and b.

Code Listing 20

x = np.array([1, 1])
y = 1

b_1 = 5
w_1 = np.array([2, 1])

w_2 = w_1 * 10
b_2 = b_1 * 10

print(example_functional_margin(w_1, b_1, x, y)) # 8
print(example_functional_margin(w_2, b_2, x, y)) # 80

To solve this problem, we only need to make a small adjustment. Instead of using the vector w,
we will use its unit vector. To do so, we will divide w by its norm. In the same way, we will divide
b by the norm of w to make it scale invariant as well.

Recall the formula of the functional margin:

f = y(w · x + b)

We modify it and obtain a new number γ:

γ = y((w/||w||) · x + b/||w||)

As before, given a data set D, we can compute:

M = min_{i=1…m} γ_i

The advantage of γ is that it gives us the same number no matter how large the vector w that
we choose is. The number γ also has a name—it is called the geometric margin of a training
example, while M is the geometric margin of the dataset. A Python implementation is shown in
Code Listing 21.

Code Listing 21

# Compute the geometric margin of an example (x,y)
# with respect to a hyperplane defined by w and b.
def example_geometric_margin(w, b, x, y):
    norm = np.linalg.norm(w)
    result = y * (np.dot(w/norm, x) + b/norm)
    return result

# Compute the geometric margin of a hyperplane
# for examples X with labels y.
def geometric_margin(w, b, X, y):
    return np.min([example_geometric_margin(w, b, x, y[i])
                   for i, x in enumerate(X)])

We can verify that the geometric margin behaves as expected. In Code Listing 22, the function
returns the same value for the vector w_1 or its rescaled version w_2.

Code Listing 22

x = np.array([1,1])
y = 1

b_1 = 5
w_1 = np.array([2,1])

w_2 = w_1*10
b_2 = b_1*10

print(example_geometric_margin(w_1, b_1, x, y)) # 3.577708764
print(example_geometric_margin(w_2, b_2, x, y)) # 3.577708764

It is called the geometric margin because we can retrieve this formula using simple geometry. It
measures the distance between x and the hyperplane.

In Figure 27, we see that the point x' is the orthogonal projection of x onto the hyperplane. We
wish to find the distance d between x and x'.

Figure 27: The geometric margin is the distance d between the point X and the hyperplane

The vector d has the same direction as the vector w, so they share the same unit vector w/||w||. We
know that the norm of the vector d is d, so the vector d can be defined by d = d(w/||w||).

Moreover, we can see that x' = x − d, so if we substitute x' into the equation of the hyperplane
and do a little bit of algebra, we get:

w · x' + b = w · (x − d(w/||w||)) + b
           = w · x − d(w · w)/||w|| + b
           = w · x − d||w|| + b        (because w · w = ||w||²)

Now, the point x' is on the hyperplane. It means that x' satisfies the equation of the hyperplane,
and we have:

w · x' + b = 0
w · x − d||w|| + b = 0
d = (w · x + b) / ||w||

Eventually, as we did before, we multiply by y to ensure that we select a hyperplane that
correctly classifies the data, and it gives us the geometric margin formula we saw earlier:

γ = y((w/||w||) · x + b/||w||)
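
As a sanity check (my snippet, not one of the book's listings), the geometric margin of a correctly
classified example should match the usual point-to-line distance |w · x + b| / ||w||:

import numpy as np

w, b = np.array([2.0, 1.0]), 5.0
x, y = np.array([1.0, 1.0]), 1

norm = np.linalg.norm(w)
geometric_margin = y * (np.dot(w / norm, x) + b / norm)
distance = abs(np.dot(w, x) + b) / norm

print(geometric_margin) # 3.577708764
print(distance)         # 3.577708764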

Figure 28: A hyperplane defined by w = (-0.4, -1) and b = 8
Figure 29: A hyperplane defined by w = (-0.4, -1) and b = 8.5

Now that we have defined the geometric margin, let us see how it allows us to compare two
hyperplanes. Compared to the one in Figure 29, the hyperplane in Figure 28 is closer to the blue
star examples than to the red triangle examples. As a result, we expect its geometric margin to
be smaller. Code Listing 23 uses the function defined in Code Listing 21 to compute the
geometric margin for each hyperplane. As expected, the geometric margin of the second
hyperplane, defined by w = (-0.4, -1) and b = 8.5, is larger (0.64 > 0.18). Between the two, we
would select this hyperplane.

Code Listing 23

# Compare two hyperplanes using the geometrical margin.

positive_x = [[2,7],[8,3],[7,5],[4,4],[4,6],[1,3],[2,5]]
negative_x = [[8,7],[4,10],[9,7],[7,10],[9,6],[4,8],[10,10]]

X = np.vstack((positive_x, negative_x))
y = np.hstack((np.ones(len(positive_x)), -1*np.ones(len(negative_x))))

w = np.array([-0.4, -1])
b = 8

# Change the value of b.
print(geometric_margin(w, b, X, y))   # 0.185695338177
print(geometric_margin(w, 8.5, X, y)) # 0.64993368362

We see that to compute the geometric margin for another hyperplane, we just need to modify
the value of w or b. We could try to change them by small increments to see if the margin gets
larger, but that would be random, and it would take a lot of time. Our objective is to find the optimal
hyperplane for a dataset among all possible hyperplanes, and there is an infinity of
hyperplanes.

Tip: Finding the optimal hyperplane is just a matter of finding the values of w and b
for which we get the largest geometric margin.

How can we find the values of w and b that produce the largest geometric margin? Luckily for us,
mathematicians have designed tools to solve such problems. To find w and b, we need to solve
what is called an optimization problem. Before looking at what the optimization problem is for
SVMs, let us do a quick review of what an optimization problem is.

What is an optimization problem?

Unconstrained optimization problem


The goal of an optimization problem is to minimize or maximize a function with respect to
some variable x (that is, to find the value of x for which the function returns its minimum or
maximum value). For instance, the problem in which we want to find the minimum of the
function f(x) = x² is written:

minimize_x  x²

Or, alternatively:

minimize_x  f(x)

In this case, we are free to search amongst all possible values of x. We say that the problem is
unconstrained. As we can see in Figure 30, the minimum of the function is zero at x = 0.

Figure 30: Without constraint, the minimum is zero
Figure 31: Because of the constraint x - 2 = 0, the minimum is 4

Constrained optimization problem


Single equality constraint
Sometimes we are not interested in the minimum of the function by itself, but rather its minimum
when some constraints are met. In such cases, we write the problem and add the constraints
preceded by subject to, which is often abbreviated s.t. For instance, if we wish to know the
minimum of f but restrict the value of x to a specific value, we can write:

minimize_x  x²
subject to  x = 2

This example is illustrated in Figure 31. In general, constraints are written by keeping zero on
the right side of the equality, so the problem can be rewritten:

minimize_x  x²
subject to  x - 2 = 0

Using this notation, we clearly see that the constraint is an affine function while the objective
function is a quadratic function. Thus we call this problem a quadratic optimization
problem or a Quadratic Programming (QP) problem.
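
As an aside (my snippet; the book does not rely on scipy), such a small problem can be handed to
a generic solver to check the result:

from scipy.optimize import minimize

# Minimize f(x) = x^2 subject to the equality constraint x - 2 = 0.
result = minimize(fun=lambda x: x[0] ** 2,
                  x0=[0.0],
                  constraints=[{'type': 'eq', 'fun': lambda x: x[0] - 2}])

print(result.x)   # [2.]
print(result.fun) # 4.0 (approximately)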

Feasible set
The set of variables that satisfies the problem constraints is called the feasible set (or feasible
region). When solving the optimization problem, the solution will be picked from the feasible set.
In Figure 31, the feasible set contains only one value, so the problem is trivial. However, when
we manipulate functions with several variables, such as f(x, y) = x² + y², it allows us to know
from which values we are trying to pick a minimum (or maximum).

For example:

minimize_{x,y}  x² + y²
subject to  x + y - 1 = 0

In this problem, the feasible set is the set of all pairs of points (x, y) such that x + y = 1.

Multiple equality constraints and vector notation


We can add as many constraints as we want. Here is an example of a problem with three
constraints for a function f of the two variables x₁ and x₂:

minimize_{x₁,x₂}  f(x₁, x₂)
subject to  g₁(x₁, x₂) = 0
            g₂(x₁, x₂) = 0
            g₃(x₁, x₂) = 0

When we have several variables, we can switch to vector notation to improve readability. For
the vector x = (x₁, x₂), the function becomes f(x), and the problem is written:

minimize_x  f(x)
subject to  g_i(x) = 0, i = 1, …, 3

When adding constraints, keep in mind that doing so reduces the feasible set. For a solution to
be accepted, all constraints must be satisfied.

For instance, let us look at the following problem:

minimize_x  x²
subject to  x - 2 = 0
            x - 8 = 0

We could think that x = 2 and x = 8 are solutions, but this is not the case. When x = 2, the
constraint x - 8 = 0 is not met; and when x = 8, the constraint x - 2 = 0 is not met. The problem
is infeasible.

Tip: If you add too many constraints to a problem, it can become infeasible.

If you change an optimization problem by adding a constraint, you make the optimum worse, or,
at best, you leave it unchanged (Gershwin, 2010).

Inequality constraints
We can also use inequalities as constraints:

minimize_x  f(x)
subject to  g(x) ≥ 0

And we can combine equality constraints and inequality constraints:

minimize_x  f(x)
subject to  g(x) = 0
            h(x) ≥ 0

How do we solve an optimization problem?


Several methods exist for solving each type of optimization problem. However, presenting
them is outside the scope of this book. The interested reader can see Optimization Models and
Applications (El Ghaoui, 2015) and Convex Optimization (Boyd & Vandenberghe, 2004), two
good books for starting on the subject that are available online for free (see Bibliography for
details). We will instead focus on SVMs again and derive an optimization problem allowing
us to find the optimal hyperplane. How to solve the SVM optimization problem will be explained
in detail in the next chapter.

The SVM optimization problem


Given a linearly separable training set D = {(x_i, y_i), i = 1, …, m} and a hyperplane with
a normal vector w and bias b, recall that the geometric margin M of the hyperplane is defined
by:

M = min_{i=1…m} γ_i

where γ_i = y_i((w/||w||) · x_i + b/||w||) is the geometric margin of a training example (x_i, y_i).

The optimal separating hyperplane is the hyperplane defined by the normal vector w and bias b
for which the geometric margin M is the largest.

To find w and b, we need to solve the following optimization problem, with the constraint that the
margin of each example should be greater than or equal to M:

maximize_{w,b}  M
subject to  γ_i ≥ M, i = 1, …, m

There is a relationship between the geometric margin and the functional margin:

γ_i = f_i / ||w||

So we can rewrite the problem:

maximize_{w,b}  F / ||w||
subject to  f_i / ||w|| ≥ F / ||w||, i = 1, …, m

We can then simplify the constraint by removing the norm on both sides of the inequality:

maximize_{w,b}  F / ||w||
subject to  f_i ≥ F, i = 1, …, m

Recall that we are trying to maximize the geometric margin and that the scale of w and b does
not matter. We can choose to rescale w and b as we want, and the geometric margin will not
change. As a result, we decide to scale w and b so that F = 1. It will not affect the result of the
optimization problem.

The problem becomes:

maximize_{w,b}  F / ||w||
subject to  f_i ≥ F, i = 1, …, m

Because f_i = y_i(w · x_i + b), it is the same as:

maximize_{w,b}  F / ||w||
subject to  y_i(w · x_i + b) ≥ F, i = 1, …, m

And because we decided to set F = 1, this is equivalent to:

maximize_{w,b}  1 / ||w||
subject to  y_i(w · x_i + b) ≥ 1, i = 1, …, m

This maximization problem is equivalent to the following minimization problem:

minimize_{w,b}  ||w||
subject to  y_i(w · x_i + b) ≥ 1, i = 1, …, m

Tip: You can also read an alternate derivation of this optimization problem on this
page, where I use geometry instead of the functional and geometric margins.

This minimization problem gives the same result as the following:

minimize_{w,b}  (1/2)||w||²
subject to  y_i(w · x_i + b) ≥ 1, i = 1, …, m

The factor 1/2 has been added for later convenience, when we will use a QP solver to solve the
problem, and squaring the norm has the advantage of removing the square root.

Eventually, here is the optimization problem as you will see it written in most of the literature:

minimize_{w,b}  (1/2)||w||²
subject to  y_i(w · x_i + b) - 1 ≥ 0, i = 1, …, m

Why did we take the pain of rewriting the problem like this? Because the original optimization
problem was difficult to solve. Instead, we now have a convex quadratic optimization problem,
which, although not obvious, is much simpler to solve.

Summary
First, we assumed that some hyperplanes are better than others: they will perform better with
unseen data. Among all possible hyperplanes, we decided to call the “best” hyperplane the
optimal hyperplane. To find the optimal hyperplane, we searched for a way to compare two
hyperplanes, and we ended up with a number allowing us to do so. We realized that this
number also has a geometrical meaning and is called the geometric margin.

We then stated that the optimal hyperplane is the one with the largest geometric margin and
that we can find it by maximizing the margin. To make things easier, we noted that we could
instead minimize the norm of w, the vector normal to the hyperplane, and we will be sure that this
w is the one of the optimal hyperplane (because, if you recall, ||w|| is used in the formula for
computing the geometric margin).

Chapter 4 Solving the Optimization Problem

Lagrange multipliers
The Italian-French mathematician Giuseppe Lodovico Lagrangia, also known as Joseph-
Louis Lagrange, invented a strategy for finding the local maxima and minima of a function
subject to equality constraints. It is called the method of Lagrange multipliers.

The method of Lagrange multipliers


Lagrange noticed that when we try to solve an optimization problem of the form:

minimize_x  f(x)
subject to  g(x) = 0

the minimum of f is found when its gradient points in the same direction as the gradient of g.
In other words, when:

∇f(x) = α∇g(x)

So if we want to find the minimum of f under the constraint g, we just need to solve for:

∇f(x) - α∇g(x) = 0

Here, the constant α is called a Lagrange multiplier.

To simplify the method, we observe that if we define a function L(x, α) = f(x) - αg(x), then its
gradient is ∇L(x, α) = ∇f(x) - α∇g(x). As a result, solving for ∇L(x, α) = 0 allows us to find the
minimum.

The Lagrange multiplier method can be summarized by these three steps:

1. Construct the Lagrangian function L by introducing one multiplier per constraint.
2. Get the gradient ∇L of the Lagrangian.
3. Solve for ∇L(x, α) = 0.
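
As an illustration (my example, not the book's), here are the three steps applied to the small
problem from the feasible set section, using sympy to do the algebra:

import sympy as sp

x, y, alpha = sp.symbols('x y alpha')

# Step 1: Lagrangian for f(x, y) = x^2 + y^2 subject to x + y - 1 = 0.
L = x**2 + y**2 - alpha * (x + y - 1)

# Steps 2 and 3: set the gradient of L to zero and solve.
solution = sp.solve([sp.diff(L, v) for v in (x, y, alpha)], (x, y, alpha))
print(solution) # {x: 1/2, y: 1/2, alpha: 1}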

The SVM Lagrangian problem


We saw in the last chapter that the SVM optimization problem is:

minimize_{w,b}  (1/2)||w||²
subject to  y_i(w · x_i + b) - 1 ≥ 0, i = 1, …, m
Let us return to this problem. We have one objective function to minimize:

f(w) = (1/2)||w||²

and m constraint functions:

g_i(w, b) = y_i(w · x_i + b) - 1, i = 1, …, m

We introduce the Lagrangian function:

L(w, b, α) = (1/2)||w||² - Σ_i α_i [y_i(w · x_i + b) - 1]

Note that we introduced one Lagrange multiplier α_i for each constraint function.

We could try to solve for ∇L(w, b, α) = 0, but the problem can only be solved analytically when the
number of examples is small (Tyson Smith, 2004). So we will once again rewrite the problem
using the duality principle.

To get the solution of the primal problem, we need to solve the following Lagrangian problem:

min_{w,b} max_α  L(w, b, α)
subject to  α_i ≥ 0, i = 1, …, m

What is interesting here is that we need to minimize with respect to w and b, and to maximize
with respect to α at the same time.

Tip: You may have noticed that the method of Lagrange multipliers is used for
solving problems with equality constraints, and here we are using them with
inequality constraints. This is because the method still works for inequality
constraints, provided some additional conditions (the KKT conditions) are met. We
will talk about these conditions later.

The Wolfe dual problem


The Lagrangian problem has m inequality constraints (where m is the number of training
examples) and is typically solved using its dual form. The duality principle tells us that an
optimization problem can be viewed from two perspectives. The first one is the primal problem,
a minimization problem in our case, and the other one is the dual problem, which will be a
maximization problem. What is interesting is that the maximum of the dual problem will always
be less than or equal to the minimum of the primal problem (we say it provides a lower bound to
the solution of the primal problem).

In our case, we are trying to solve a convex optimization problem, and Slater’s condition holds
for affine constraints (Gretton, 2016), so Slater’s theorem tells us that strong duality holds.
This means that the maximum of the dual problem is equal to the minimum of the primal
problem. Solving the dual is the same thing as solving the primal, except it is easier.

Recall that the Lagrangian function is:

L(w, b, α) = (1/2) w · w - Σ_i α_i [y_i(w · x_i + b) - 1]

The Lagrangian primal problem is:

min_{w,b} max_α  L(w, b, α)
subject to  α_i ≥ 0, i = 1, …, m

Solving the minimization problem involves taking the partial derivatives of L with respect to w
and b:

∇_w L = w - Σ_i α_i y_i x_i = 0
∂L/∂b = - Σ_i α_i y_i = 0

From the first equation, we find that:

w = Σ_i α_i y_i x_i

Let us substitute w by this value into L:

W(α, b) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j) - b Σ_i α_i y_i

So we successfully removed w, but b is still used in the last term of the function:

b Σ_i α_i y_i

We note that ∂L/∂b = 0 implies that Σ_i α_i y_i = 0. As a result, the last term is equal to zero, and we
can write:

W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)

This is the Wolfe dual Lagrangian function.

The optimization problem is now called the Wolfe dual problem:

maximize_α  Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)
subject to  α_i ≥ 0, i = 1, …, m
            Σ_i α_i y_i = 0

Traditionally, the Wolfe dual Lagrangian problem is constrained by the gradients being equal to
zero. In theory, we should add the constraints w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0. However, we only added
the latter. Indeed, we added Σ_i α_i y_i = 0 because it is necessary for removing b from the function.

However, we can solve the problem without the constraint w = Σ_i α_i y_i x_i.

The main advantage of the Wolfe dual problem over the Lagrangian problem is that the
objective function now depends only on the Lagrange multipliers. Moreover, this formulation
will help us solve the problem in Python in the next section and will be very helpful when we
define kernels later.

Karush-Kuhn-Tucker conditions
Because we are dealing with inequality constraints, there is an additional requirement: the
solution must also satisfy the Karush-Kuhn-Tucker (KKT) conditions.

The KKT conditions are first-order necessary conditions for a solution of an optimization
problem to be optimal. Moreover, the problem should satisfy some regularity conditions. Luckily
for us, one of the regularity conditions is Slater’s condition, and we just saw that it holds for
SVMs. Because the primal problem we are trying to solve is a convex problem, the KKT
conditions are also sufficient for the point to be primal and dual optimal, and there is zero
duality gap.

To sum up, if a solution satisfies the KKT conditions, we are guaranteed that it is the
optimal solution.

The Karush-Kuhn-Tucker conditions are:

• Stationarity condition:
  ∇_w L = w - Σ_i α_i y_i x_i = 0
  ∂L/∂b = - Σ_i α_i y_i = 0

• Primal feasibility condition:
  y_i(w · x_i + b) - 1 ≥ 0 for all i = 1, …, m

• Dual feasibility condition:
  α_i ≥ 0 for all i = 1, …, m

• Complementary slackness condition:
  α_i [y_i(w · x_i + b) - 1] = 0 for all i = 1, …, m

Note: “[...] Solving the SVM problem is equivalent to finding a solution to the KKT
conditions.” (Burges, 1998)

Note that we already saw most of these conditions before. Let us examine them one by one.

Stationarity condition
The stationarity condition tells us that the selected point must be a stationary point. It is a point
where the function stops increasing or decreasing. When there is no constraint, the stationarity
condition is just the point where the gradient of the objective function is zero. When we have
constraints, we use the gradient of the Lagrangian.

Primal feasibility condition


Looking at this condition, you should recognize the constraints of the primal problem. It makes
sense that they must be enforced to find the minimum of the function under constraints.

Dual feasibility condition


Similarly, this condition represents the constraints that must be respected for the dual problem.

Complementary slackness condition
From the complementary slackness condition, we see that either α_i = 0 or y_i(w · x_i + b) - 1 = 0.

Support vectors are examples having a positive Lagrange multiplier. They are the ones for
which the constraint y_i(w · x_i + b) - 1 ≥ 0 is active. (We say the constraint is active when
y_i(w · x_i + b) - 1 = 0).

Tip: From the complementary slackness condition, we see that support vectors are
examples that have a positive Lagrange multiplier.

What to do once we have the multipliers?


When we solve the Wolfe dual problem, we get a vector α containing all Lagrange multipliers.
However, when we first stated the primal problem, our goal was to find w and b. Let us see how
we can retrieve these values from the Lagrange multipliers.

Compute w

Computing w is pretty simple since we derived the formula w = Σ_i α_i y_i x_i from the gradient ∇_w L = 0.

Compute b
Once we have w, we can use one of the constraints of the primal problem to compute b:

y_i(w · x_i + b) - 1 ≥ 0

Indeed, this constraint is still true because we transformed the original problem in such a way
that the new formulations are equivalent. What it says is that the closest points to the
hyperplane will have a functional margin of 1 (the value 1 is the value we chose when we
decided how to scale w):

y_i(w · x_i + b) = 1

From there, as we know all other variables, it is easy to come up with the value of b. We multiply
both sides of the equation by y_i, and because y_i² = 1, it gives us:

b = y_i - w · x_i

However, as indicated in Pattern Recognition and Machine Learning (Bishop, 2006), instead of
taking a random support vector x_i, taking the average provides us with a numerically more
stable solution:

b = (1/S) Σ_{i=1…S} (y_i - w · x_i)

where S is the number of support vectors.

Other authors, such as (Cristianini & Shawe-Taylor, 2000) and (Ng), use another formula:

b = -(1/2) [ max_{i: y_i = -1} (w · x_i) + min_{i: y_i = +1} (w · x_i) ]

They basically take the average of the nearest positive support vector and the nearest negative
support vector. This latest formula is the one originally used by Statistical Learning Theory
(Vapnik V. N., 1998) when defining the optimal hyperplane.

Hypothesis function
The SVMs use the same hypothesis function as the Perceptron. The class of an example x is
given by:

h(x) = sign(w · x + b)

When using the dual formulation, it is computed using only the support vectors:

h(x) = sign( Σ_{i∈S} α_i y_i (x_i · x) + b )
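
A small sketch of this dual hypothesis function (my code; it assumes the variables
support_vectors, support_vectors_y, sv_multipliers, and b computed later in this chapter):

import numpy as np

def hypothesis(x, support_vectors, support_vectors_y, sv_multipliers, b):
    # Only the support vectors contribute to the prediction.
    total = sum(a_i * y_i * np.dot(x_i, x)
                for a_i, y_i, x_i in zip(sv_multipliers,
                                         support_vectors_y,
                                         support_vectors))
    return np.sign(total + b)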

Solving SVMs with a QP solver


A QP solver is a program used to solve quadratic programming problems. In the following
example, we will use the Python package called CVXOPT.

This package provides a method that is able to solve quadratic problems of the form:

minimize_x  (1/2) xᵀPx + qᵀx
subject to  Gx ⪯ h
            Ax = b

It does not look like our optimization problem, so we will need to rewrite it so that we can solve it
with this package.

First, we note that in the case of the Wolfe dual optimization problem, what we are trying to
minimize is α, so we can rewrite the quadratic problem with α instead of x to better see how the
two problems relate:

minimize_α  (1/2) αᵀPα + qᵀα
subject to  Gα ⪯ h
            Aα = b

Here the symbol ⪯ represents component-wise vector inequalities. It means that each row of the
matrix G represents an inequality that must be satisfied.

We will change the Wolfe dual problem. First, we transform the maximization problem:

maximize_α  Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)
subject to  α_i ≥ 0, i = 1, …, m
            Σ_i α_i y_i = 0

into a minimization problem by multiplying by -1:

minimize_α  (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j) - Σ_i α_i
subject to  α_i ≥ 0, i = 1, …, m
            Σ_i α_i y_i = 0

Then we introduce the vectors α = (α_1, …, α_m)ᵀ and q = (-1, …, -1)ᵀ, and the Gram matrix K of all
possible dot products of vectors x_i:

K_ij = x_i · x_j, for i = 1, …, m and j = 1, …, m

We use them to construct a vectorized version of the Wolfe dual problem, where yyᵀ denotes the
outer product of y and ∘ the element-wise product (the P = np.outer(y, y) * K of Code Listing 24):

minimize_α  (1/2) αᵀ (yyᵀ ∘ K) α + qᵀα
subject to  -α ⪯ 0
            yᵀα = 0
We are now able to find out the value for each of the parameters P, q, G, h, A, and b required by
the CVXOPT qp function. This is demonstrated in Code Listing 24.

Code Listing 24

# See Appendix A for more information about the dataset
import numpy as np
import cvxopt.solvers

from succinctly.datasets import get_dataset, linearly_separable as ls

X, y = get_dataset(ls.get_training_examples)
m = X.shape[0]

# Gram matrix - The matrix of all possible inner products of X.
K = np.array([np.dot(X[i], X[j])
              for j in range(m)
              for i in range(m)]).reshape((m, m))

P = cvxopt.matrix(np.outer(y, y) * K)
q = cvxopt.matrix(-1 * np.ones(m))

# Equality constraints
A = cvxopt.matrix(y, (1, m))
b = cvxopt.matrix(0.0)

# Inequality constraints
G = cvxopt.matrix(np.diag(-1 * np.ones(m)))
h = cvxopt.matrix(np.zeros(m))

# Solve the problem
solution = cvxopt.solvers.qp(P, q, G, h, A, b)

# Lagrange multipliers
multipliers = np.ravel(solution['x'])

# Support vectors have positive multipliers.
has_positive_multiplier = multipliers > 1e-7
sv_multipliers = multipliers[has_positive_multiplier]

support_vectors = X[has_positive_multiplier]
support_vectors_y = y[has_positive_multiplier]

Code Listing 24 initializes all the required parameters and passes them to the qp function, which
returns us a solution. The solution contains many elements, but we are only concerned about
the x, which, in our case, corresponds to the Lagrange multipliers.

As we saw before, we can re-compute w using all the Lagrange multipliers: w = Σ_i α_i y_i x_i. Code
Listing 25 shows the code of the function that computes w.

Code Listing 25

def compute_w(multipliers, X, y):
    return np.sum([multipliers[i] * y[i] * X[i]
                   for i in range(len(y))], axis=0)

Because the Lagrange multipliers for non-support vectors are almost zero, we can also compute w
using only the support vectors' data and their multipliers, as illustrated in Code Listing 26.

Code Listing 26

w = compute_w(multipliers, X, y)
w_from_sv = compute_w(sv_multipliers, support_vectors, support_vectors_y)

print(w)         # [0.44444446 1.11111114]
print(w_from_sv) # [0.44444453 1.11111128]

And we compute b using the average method:

Code Listing 27

def compute_b(w, X, y):
    return np.sum([y[i] - np.dot(w, X[i])
                   for i in range(len(X))]) / len(X)

Code Listing 28

b = compute_b(w, support_vectors, support_vectors_y) # -9.666668268506335

When we plot the result in Figure 32, we see that the hyperplane is the optimal hyperplane.
Contrary to the Perceptron, the SVM will always return the same result.

Figure 32: The hyperplane found with CVXOPT

This formulation of the SVM is called the hard margin SVM. It cannot work when the data is not
linearly separable. There are several Support Vector Machines formulations. In the next
chapter, we will consider another formulation called the soft margin SVM, which will be able to
work when data is non-linearly separable because of outliers.

Summary
Minimizing the norm of is a convex optimization problem, which can be solved using the
Lagrange multipliers method. When there are more than a few examples, we prefer using
convex optimization packages, which will do all the hard work for us.

We saw that the original optimization problem can be rewritten using a Lagrangian function.
Then, thanks to duality theory, we transformed the Lagrangian problem into the Wolfe dual
problem. We eventually used the package CVXOPT to solve the Wolfe dual.

Chapter 5 Soft Margin SVM

Dealing with noisy data


The biggest issue with hard margin SVM is that it requires the data to be linearly separable.
Real-life data is often noisy. Even when the data is linearly separable, a lot of things can happen
before you feed it to your model. Maybe someone mistyped a value for an example, or maybe
the probe of a sensor returned a crazy value. In the presence of an outlier (a data point that
seems to be out of its group), there are two cases: the outlier can be closer to the other
examples than most of the examples of its class, thus reducing the margin, or it can be among
the other examples and break linear separability. Let us consider these two cases and see how
the hard margin SVM deals with them.

Outlier reducing the margin


When the data is linearly separable, the hard margin classifier does not behave as we would
like in the presence of outliers.

Let us now consider our dataset with the addition of an outlier data point at (5, 7), as shown in
Figure 33.

Figure 33: The dataset is still linearly separable with the outlier at (5, 7)

In this case, we can see that the margin is very narrow, and it seems that the outlier is the main
reason for this change. Intuitively, we can see that this hyperplane might not be the best at
separating the data, and that it will probably generalize poorly.

Outlier breaking linear separability


Even worse, when the outlier breaks the linear separability, as the point (7, 8) does in Figure 34,
the classifier is incapable of finding a hyperplane. We are stuck because of a single data point.

Figure 34: The outlier at (7, 8) breaks linear separability

Soft margin to the rescue

Slack variables
In 1995, Vapnik and Cortes introduced a modified version of the original SVM that allows the
classifier to make some mistakes. The goal is now not to make zero classification mistakes, but
to make as few mistakes as possible.

To do so, they modified the constraints of the optimization problem by adding a variable ζ
(zeta). So the constraint:

y_i(w · x_i + b) ≥ 1

becomes:

y_i(w · x_i + b) ≥ 1 - ζ_i

As a result, when minimizing the objective function, it is possible to satisfy the constraint even if
the example does not meet the original constraint (that is, it is too close to the hyperplane, or
it is not on the correct side of the hyperplane). This is illustrated in Code Listing 29.

Code Listing 29

import numpy as np

w = np.array([0.4, 1])
b = -10

x = np.array([6, 8])
y = -1

def constraint(w, b, x, y):
    return y * (np.dot(w, x) + b)

def hard_constraint_is_satisfied(w, b, x, y):
    return constraint(w, b, x, y) >= 1

def soft_constraint_is_satisfied(w, b, x, y, zeta):
    return constraint(w, b, x, y) >= 1 - zeta

# While the hard constraint is not satisfied for the example (6,8) ...
print(hard_constraint_is_satisfied(w, b, x, y)) # False

# ... we can use zeta = 2 and satisfy the soft constraint.
print(soft_constraint_is_satisfied(w, b, x, y, zeta=2)) # True

The problem is that we could choose a huge value of ζ for every example, and all the
constraints would be satisfied.

Code Listing 30

# We can pick a huge zeta for every point
# to always satisfy the soft constraint.
print(soft_constraint_is_satisfied(w, b, x, y, zeta=10))   # True
print(soft_constraint_is_satisfied(w, b, x, y, zeta=1000)) # True

To avoid this, we need to modify the objective function to penalize the choice of a big ζ:

minimize_{w,b,ζ}  (1/2)||w||² + Σ_i ζ_i
subject to  y_i(w · x_i + b) ≥ 1 - ζ_i, i = 1, …, m

We take the sum of all individual ζ_i and add it to the objective function. Adding such a penalty is
called regularization. As a result, the solution will be the hyperplane that maximizes the margin
while having the smallest error possible.

There is still a little problem. With this formulation, one can easily minimize the function by using
negative values of ζ_i. We add the constraint ζ_i ≥ 0 to prevent this. Moreover, we would like to
keep some control over the soft margin. Maybe sometimes we want to use the hard margin—
after all, that is why we add the parameter C, which will help us to determine how important the
ζ should be (more on that later).

This leads us to the soft margin formulation:

minimize_{w,b,ζ}  (1/2)||w||² + C Σ_i ζ_i
subject to  y_i(w · x_i + b) ≥ 1 - ζ_i
            ζ_i ≥ 0, i = 1, …, m

As shown by (Vapnik V. N., 1998), using the same technique as for the separable case, we find
that we need to maximize the same Wolfe dual as before, under a slightly different
constraint:

maximize_α  Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)
subject to  0 ≤ α_i ≤ C, i = 1, …, m
            Σ_i α_i y_i = 0

Here the constraint α_i ≥ 0 has been changed to become 0 ≤ α_i ≤ C. This constraint is often
called the box constraint because the vector α is constrained to lie inside the box with side
length C in the positive orthant. Note that an orthant is the analog in n-dimensional Euclidean
space of a quadrant in the plane (Cristianini & Shawe-Taylor, 2000). We will visualize the box
constraint in Figure 50 in the chapter about the SMO algorithm.

The optimization problem is also called the 1-norm soft margin because we are minimizing the 1-
norm of the slack vector ζ.
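
In the CVXOPT setup of Code Listing 24, the only change this requires is the inequality part:
the constraint 0 ≤ α_i ≤ C stacks two inequalities per multiplier. Here is a hedged sketch of that
change (my code, assuming X, y, K, and m are defined as in Code Listing 24):

import numpy as np
import cvxopt

C = 10.0  # illustrative value

# -alpha_i <= 0  and  alpha_i <= C, stacked into a single G and h.
G = cvxopt.matrix(np.vstack((-np.eye(m), np.eye(m))))
h = cvxopt.matrix(np.hstack((np.zeros(m), C * np.ones(m))))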

Understanding what C does


The parameter C gives you control over how the SVM will handle errors. Let us now examine how
changing its value will give different hyperplanes.

Figure 35 shows the linearly separable dataset we used throughout this book. On the left, we
can see that setting C to +∞ gives us the same result as the hard margin classifier. However, if
we choose a smaller value for C, like the C = 1 we used in the center, we can see that the hyperplane is
closer to some points than others. The hard margin constraint is violated for these examples.
Setting C = 0.01 increases this behavior, as depicted on the right.

What happens if we choose a C very close to zero? Then there is basically no constraint
anymore, and we end up with a hyperplane not classifying anything.

Figure 35: Effect of C=+Infinity, C=1, and C=0.01 on a linearly separable dataset

It seems that when the data is linearly separable, sticking with a big C is the best choice. But
what if we have some noisy outlier? In this case, as we can see in Figure 36, using C = +∞
gives us a very narrow margin. However, when we use C = 1, we end up with a hyperplane very
close to the one of the hard margin classifier without the outlier. The only violated constraint is the
constraint of the outlier, and we are much more satisfied with this hyperplane. This time, setting
C = 0.01 ends up violating the constraint of another example, which was not an outlier. This
value of C seems to give too much freedom to our soft margin classifier.

Eventually, in the case where the outlier makes the data non-separable, we cannot use C = +∞
because there is no solution meeting all the hard margin constraints. Instead, we test several
values of C, and we see that the best hyperplane is achieved with C = 3. In fact, we get the
same hyperplane for all values of C greater than or equal to 3. That is because no matter how
hard we penalize it, it is necessary to violate the constraint of the outlier to be able to separate
the data. When we use a small C, as before, more constraints are violated.

Figure 37: Effect of C=3, C=1, and C=0.01 on a non-separable dataset with an outlier

Rules of thumb:

• A small C will give a wider margin, at the cost of some misclassifications.
• A huge C will give the hard margin classifier and tolerates zero constraint violation.
• The key is to find the value of C such that noisy data does not impact the solution too
much.

How to find the best C?
There is no magic value of C that will work for all problems. The recommended approach to
select C is to use grid search with cross-validation (Hsu, Chang, & Lin, A Practical Guide to
Support Vector Classification). The crucial thing to understand is that the value of C is very
specific to the data you are using, so if one day you find that C = 0.001 does not work for one of
your problems, you should still try this value with another problem, because it will not have the
same effect.
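
As an illustration (my code, assuming scikit-learn), a grid search with cross-validation over C
could look like this:

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative data: replace with your own dataset.
X = np.array([[1, 6], [1, 7], [4, 2], [5, 1], [9, 4], [10, 5]])
y = np.array([1, 1, 1, -1, -1, -1])

grid = GridSearchCV(SVC(kernel='linear'),
                    param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100]},
                    cv=3)
grid.fit(X, y)
print(grid.best_params_)  # the best C found for this particular data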

Other soft-margin formulations

2-Norm soft margin


There is another formulation of the problem called the 2-norm (or L2 regularized) soft margin,
in which we minimize (1/2)||w||² + C Σ_i ζ_i². This formulation leads to a Wolfe dual problem without
the box constraint. For more information about the 2-norm soft margin, refer to (Cristianini &
Shawe-Taylor, 2000).

nu-SVM
Because the scale of C is affected by the feature space, another formulation of the problem has
been proposed: the ν-SVM. The idea is to use a parameter ν whose value is varied between 0
and 1, instead of the parameter C.

Note: “ν gives a more transparent parametrization of the problem, which does not
depend on the scaling of the feature space, but only on the noise level in the data.”
(Cristianini & Shawe-Taylor, 2000)

The optimization problem to solve is:

maximize_α  -(1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)
subject to  0 ≤ α_i ≤ 1/m, i = 1, …, m
            Σ_i α_i y_i = 0
            Σ_i α_i ≥ ν

Summary
The soft-margin SVM formulation is a nice improvement over the hard-margin classifier. It
allows us to classify data correctly even when there is noisy data that breaks linear separability.
However, the cost of this added flexibility is that we now have a hyperparameter C for which
we need to find a value. We saw how changing the value of C impacts the margin and allows
the classifier to make some mistakes in order to have a bigger margin. This once again reminds
us that our goal is to find a hypothesis that will work well on unseen data. A few mistakes on the
training data are not a bad thing if the model generalizes well in the end.

Chapter 6 Kernels

Feature transformations

Can we classify non-linearly separable data?


Imagine you have some data that is not separable (like the one in Figure 38), and you would like
to use SVMs to classify it. We have seen that it is not possible because the data is not linearly
separable. However, this last assumption is not correct. What is important to notice here is that
the data is not linearly separable in two dimensions.

Figure 38: A straight line cannot separate the data

Even if your original data is in two dimensions, nothing prevents you from transforming it before
feeding it into the SVM. One possible transformation would be, for instance, to transform every
two-dimensional vector into a three-dimensional vector.

For example, we can do what is called a polynomial mapping by applying the function
φ: ℝ² → ℝ³ defined by:

φ(x₁, x₂) = (x₁², √2 x₁x₂, x₂²)

Code Listing 31 shows this transformation implemented in Python.
Code Listing 31 shows this transformation implemented in Python.

Code Listing 31

# Transform a two-dimensional vector x into a three-dimensional vector.
def transform(x):
    return [x[0]**2, np.sqrt(2)*x[0]*x[1], x[1]**2]

If you transform the whole data set of Figure 38 and plot the result, you get Figure 39, which
does not show much improvement. However, after some time playing with the graph, we can
see that the data is, in fact, separable in three dimensions (Figure 40 and Figure 41)!

Figure 39: The data does not look separable in three dimensions

Figure 40: The data is, in fact, separable by a plane

Figure 41: Another view of the data showing the plane from the side

Here is a basic recipe we can use to classify this dataset:

1. Transform every two-dimensional vector into a three-dimensional vector using the


transform method of Code Listing 31.
2. Train the SVMs using the 3D dataset.
3. For each new example we wish to predict, transform it using the transform method
before passing it to the predict method.

Of course, you are not forced to transform the data into three dimensions; it could be five, ten,
or one hundred dimensions.
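
As an illustration of the recipe (my code, using scikit-learn's LinearSVC for the training step):

import numpy as np
from sklearn.svm import LinearSVC

def transform(x):
    # Polynomial mapping from two to three dimensions (Code Listing 31).
    return [x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2]

# Illustrative two-dimensional data.
X = np.array([[1, 1], [1, 2], [2, 1], [6, 6], [6, 7], [7, 6]])
y = np.array([1, 1, 1, -1, -1, -1])

# Steps 1 and 2: transform the data, then train on the 3D dataset.
X_3d = np.array([transform(x) for x in X])
classifier = LinearSVC().fit(X_3d, y)

# Step 3: transform a new example before predicting.
print(classifier.predict([transform([2, 2])]))  # [1]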

How do we know which transformation to apply?


Choosing which transformation to apply depends a lot on your dataset. Being able to transform
the data so that the machine learning algorithm you wish to use performs at its best is probably
one key factor of success in the machine learning world. Unfortunately, there is no perfect
recipe, and it will come with experience via trial and error. Before using any algorithm, be sure
to check if there are some common rules to transform the data detailed in the documentation.
For more information about how to prepare your data, you can read the dataset transformation
section on the scikit-learn website.

What is a kernel?
In the last section, we saw a quick recipe to use on the non-separable dataset. One of its main
drawbacks is that we must transform every example. If we have millions or billions of examples
and that transform method is complex, that can take a huge amount of time. This is when
kernels come to the rescue.

If you recall, when we search for the KKT multipliers in the Wolfe dual Lagrangian function, we
do not need the value of a training example x_i; we only need the value of the dot product
between two training examples:

x_i · x_j

In Code Listing 32, we apply the first step of our recipe. Imagine that when the data is used to
learn, the only thing we care about is the value returned by the dot product, in this example
8,100.

Code Listing 32

x1 = [3,6]
x2 = [10,10]

x1_3d = transform(x1)
x2_3d = transform(x2)

print(np.dot(x1_3d, x2_3d)) # 8100

The question is this: Is there a way to compute this value, without transforming the
vectors?

And the answer is: Yes, with a kernel!

Let us consider the function in Code Listing 33:

Code Listing 33

def polynomial_kernel(a, b):
    return a[0]**2 * b[0]**2 + 2*a[0]*b[0]*a[1]*b[1] + a[1]**2 * b[1]**2

Using this function with the same two examples as before returns the same result (Code Listing
34).

Code Listing 34

x1 = [3,6]
x2 = [10,10]

# We do not transform the data.

print(polynomial_kernel(x1, x2)) # 8100

When you think about it, this is pretty incredible.

The vectors x1 and x2 belong to ℝ². The kernel function computes their dot product as if they
had been transformed into vectors belonging to ℝ³, and it does that without doing the
transformation, and without computing their dot product in ℝ³!

To sum up: a kernel is a function that returns the result of a dot product performed in another
space. More formally, we can write:

Definition: Given a mapping function φ: X → V, we call the function K defined by

K(x, x') = ⟨φ(x), φ(x')⟩, where ⟨·,·⟩ denotes an inner product in V, a kernel function.

The kernel trick


Now that we know what a kernel is, we will see what the kernel trick is.

If we define a kernel as K(x_i, x_j) = x_i · x_j, we can then rewrite the soft-margin dual problem:

maximize_α  Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
subject to  0 ≤ α_i ≤ C, i = 1, …, m
            Σ_i α_i y_i = 0
That’s it. We have made a single change to the dual problem—we call it the kernel trick.

Tip: Applying the kernel trick simply means replacing the dot product of two
examples by a kernel function.

This change looks very simple, but remember that it took a serious amount of work to derive the
Wolfe dual formulation from the original optimization problem. We now have the power to change
the kernel function in order to classify non-separable data.

Of course, we also need to change the hypothesis function to use the kernel function:

h(x) = sign( Σ_{i∈S} α_i y_i K(x_i, x) + b )

Remember that S in this formula is the set of support vectors. Looking at this formula, we better
understand why SVMs are also called sparse kernel machines. It is because they only need to
compute the kernel function on the support vectors and not on all the vectors, like other kernel
methods (Bishop, 2006).

Kernel types

Linear kernel
This is the simplest kernel. It is simply defined by:

K(x, x') = x · x'

where x and x' are two vectors.

In practice, you should know that a linear kernel works well for text classification.

Polynomial kernel
We already saw the polynomial kernel earlier when we introduced kernels, but this time we will
consider the more generic version of the kernel:

K(x, x') = (x · x' + c)^d

It has two parameters: c, which represents a constant term, and d, which represents the degree
of the kernel. This kernel can be implemented easily in Python, as shown in Code Listing 35.

Code Listing 35

def polynomial_kernel(a, b, degree, constant=0):
    result = sum([a[i] * b[i] for i in range(len(a))]) + constant
    return pow(result, degree)

In Code Listing 36, we see that it returns the same result as the kernel of Code Listing 33 when
we use the degree 2. The result of training a SVM with this kernel is shown in Figure 42.

Code Listing 36

x1 = [3,6]
x2 = [10,10]
# We do not transform the data.

print(polynomial_kernel(x1, x2, degree=2)) # 8100

Figure 42: A SVM using a polynomial kernel is able to separate the data (degree=2)

Updating the degree


A polynomial kernel with a degree of 1 and no constant is simply the linear kernel (Figure 43).
When you increase the degree of a polynomial kernel, the decision boundary will become more
complex and will have a tendency to be influenced by individual data examples, as illustrated in
Figure 44. Using a high-degree polynomial is dangerous because you can often achieve better
performance on your training set, but it leads to what is called overfitting: the model is too close to
the data and does not generalize well.

Figure 43: A polynomial kernel with degree = 1
Figure 44: A polynomial kernel with degree = 6

Note: Using a high-degree polynomial kernel will often lead to overfitting.

RBF or Gaussian kernel


Sometimes polynomial kernels are not sophisticated enough to work. When you have a difficult
dataset like the one depicted in Figure 45, this type of kernel will show its limitation.

Figure 45: This dataset is more difficult to work with

As we can see in Figure 46, the decision boundary is very bad at classifying the data.

Figure 46: A polynomial kernel is not able to separate the data (degree=3, C=100)

This case calls for us to use another, more complicated, kernel: the Gaussian kernel. It is also
named the RBF kernel, where RBF stands for Radial Basis Function. A radial basis function is a
function whose value depends only on the distance from the origin or from some point.

The RBF kernel function is:

K(x, x') = exp(-γ ||x - x'||²)

You will often read that it projects vectors into an infinite-dimensional space. What does this
mean?

Recall this definition: a kernel is a function that returns the result of a dot product performed in
another space.

In the case of the polynomial kernel example we saw earlier, the kernel returned the result of a
dot product performed in ℝ³. As it turns out, the RBF kernel returns the result of a dot product
performed in an infinite-dimensional space.
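
A direct implementation sketch of the formula above (my code, not one of the book's listings):

import numpy as np

def rbf_kernel(a, b, gamma):
    # K(a, b) = exp(-gamma * ||a - b||^2)
    diff = np.array(a) - np.array(b)
    return np.exp(-gamma * np.dot(diff, diff))

print(rbf_kernel([3, 6], [10, 10], gamma=0.1)) # about 0.0015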

I will not go into details here, but if you wish, you can read this proof to better understand how
we came to this conclusion.

Figure 47: The RBF kernel classifies the data correctly with gamma = 0.1

This video is particularly useful to understand how the RBF kernel is able to separate the data.

Changing the value of gamma

Figure 48: A Gaussian kernel with gamma = 1e-5
Figure 49: A Gaussian kernel with gamma = 2

When gamma is too small, as in Figure 48, the model behaves like a linear SVM. When gamma
is too large, the model is too heavily influenced by each support vector, as shown in Figure 49.
For more information about gamma, you can read this scikit-learn documentation page.

Other types
Research on kernels has been prolific, and there are now a lot of kernels available. Some of
them are specific to a domain, such as the string kernel, which can be used when working with
text. If you want to discover more kernels, this article from César Souza describes 25 kernels.

Which kernel should I use?
The recommended approach is to try an RBF kernel first, because it usually works well. However,
it is good to try the other types of kernels if you have enough time to do so. A kernel is a
measure of the similarity between two vectors, so that is where domain knowledge of the
problem at hand may have the biggest impact. Building a custom kernel can also be a
possibility, but it requires that you have a good mathematical understanding of the theory behind
kernels. You can find more information on this subject in (Cristianini & Shawe-Taylor, 2000).

Summary
The kernel trick is one key component making Support Vector Machines powerful. It allows us to
apply SVMs on a wide variety of problems. In this chapter, we saw the limitations of the linear
kernel, and how a polynomial kernel can classify non-separable data. Eventually, we saw one of
the most used and most powerful kernels: the RBF kernel. Do not forget that there are many
kernels, and try looking for kernels created to solve the kind of problems you are trying to solve.
Using the right kernel with the right dataset is one key element in your success or failure with
SVMs.


Chapter 7 The SMO Algorithm

We saw how to solve the SVM optimization problem using a convex optimization package.
However, in practice, we will use an algorithm specifically created to solve this problem quickly:
the SMO (sequential minimal optimization) algorithm. Most machine learning libraries use
the SMO algorithm or some variation.

The SMO algorithm will solve the following optimization problem:

minimize_α  (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) - Σ_i α_i
subject to  0 ≤ α_i ≤ C, i = 1, …, m
            Σ_i α_i y_i = 0

It is a kernelized version of the soft-margin formulation we saw in Chapter 5. The objective
function we are trying to minimize can be written in Python (Code Listing 37):

Code Listing 37

def kernel(x1, x2):
    return np.dot(x1, x2.T)

def objective_function_to_minimize(X, y, a, kernel):
    m, n = np.shape(X)
    return 1 / 2 * np.sum([a[i] * a[j] * y[i] * y[j] * kernel(X[i, :], X[j, :])
                           for j in range(m)
                           for i in range(m)]) \
           - np.sum([a[i] for i in range(m)])

This is the same problem we solved using CVXOPT. Why do we need another method?
Because we would like to be able to use SVMs with big datasets, and using convex optimization
packages usually involves matrix operations that take a lot of time as the size of the matrix
increases, or that become impossible because of memory limitations. The SMO algorithm has been
created with the goal of being faster than other methods.

The idea behind SMO
When we try to solve the SVM optimization problem, we are free to change the values of α as
long as we respect the constraints. Our goal is to modify α so that, in the end, the objective
function returns the smallest possible value. In this context, given a vector α = (α_1, …, α_m) of
Lagrange multipliers, we can change the value of any α_i until we reach our goal.

The idea behind SMO is quite easy: we will solve a simpler problem. That is, given a vector
α = (α_1, …, α_m), we will allow ourselves to change only two values of α, for instance, α_3 and
α_7. We will change them until the objective function reaches its minimum given this set of
alphas. Then we will pick two other alphas and change them until the function returns its
smallest value, and so on. If we continue doing that, we will eventually reach the minimum of the
objective function of the original problem.

SMO solves a sequence of several simpler optimization problems.

How did we get to SMO?


This idea of solving several simpler optimization problems is not new. In 1982, Vapnik proposed
a method known as “chunking,” which breaks the original problem down into a series of smaller
problems (Vapnik V. , 1982). What made things change is that in 1997, Osuna, et al., proved
that solving a sequence of sub-problems will be guaranteed to converge as long as we add at
least one example violating the KKT conditions (Osuna, Freund, & Girosi, 1997).

Using this result, one year later, in 1998, Platt proposed the SMO algorithm.

Why is SMO faster?


The great advantage of the SMO approach is that we do not need a QP solver to solve the
problem for two Lagrange multipliers—it can be solved analytically. As a consequence, it does
not need to store a huge matrix, which can cause problems with machine memory. Moreover,
SMO uses several heuristics to speed up the computation.

The SMO algorithm


The SMO algorithm is composed of three parts:

• One heuristic to choose the first Lagrange multiplier


• One heuristic to choose the second Lagrange multiplier
• The code to solve the optimization problem analytically for the two chosen multipliers

Tip: A Python implementation of the algorithm is available in Appendix B: The
SMO Algorithm. All code listings in this section are taken from this appendix and do
not work alone.

The analytical solution


At the beginning of the algorithm, we start with a vector α = (α_1, …, α_m) in which
α_i = 0 for every i. The idea is to pick two elements of this vector, which we will name α_1
and α_2, and to change their values so that the constraints are still respected.

The first constraint, 0 ≤ α_i ≤ C, means that 0 ≤ α_1 ≤ C and 0 ≤ α_2 ≤ C. That
is why we are forced to select a value lying in the blue box of Figure 50 (which displays an
example for one particular value of C).

The second constraint is a linear constraint, Σ_i α_i y_i = 0. It forces the values to lie on the red
diagonal, and the first pair of selected α_1 and α_2 should have different labels (y_1 ≠ y_2).

Figure 50: The feasible set is the diagonal of the box

In general, to avoid breaking the linear constraint, we must change the multipliers so that:

α_1 y_1 + α_2 y_2 = constant = α_1^old y_1 + α_2^old y_2

We will not go into the details of how the problem is solved analytically, as it is done very well in
(Cristianini & Shawe-Taylor, 2000) and in (Platt J. C., 1998).

Remember that there is a formula to compute the new α_2:

α_2^new = α_2 + y_2(E_1 - E_2)/η

with E_i = h(x_i) - y_i being the difference between the output of the hypothesis function and the
example label, and η = K(x_1, x_1) + K(x_2, x_2) - 2K(x_1, x_2), where K is the kernel function.
We also compute two bounds, L and H, which apply to α_2^new; it
cannot be smaller than the lower bound L, or larger than the upper bound H, or constraints will be
violated. So α_2^new is clipped if this is the case.

Once we have this new value, we use it to compute the new α_1 using this formula:

α_1^new = α_1 + y_1 y_2 (α_2 - α_2^new)
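
A condensed sketch of this analytical step (my code, following Platt's formulas above; it assumes
η > 0, which is the usual case):

import numpy as np

def analytic_step(a1, a2, y1, y2, E1, E2, x1, x2, C, kernel):
    # Bounds L and H keeping both multipliers inside the box.
    if y1 != y2:
        L, H = max(0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0, a1 + a2 - C), min(C, a1 + a2)

    eta = kernel(x1, x1) + kernel(x2, x2) - 2 * kernel(x1, x2)

    # New alpha2, clipped to [L, H].
    a2_new = np.clip(a2 + y2 * (E1 - E2) / eta, L, H)

    # New alpha1, keeping a1*y1 + a2*y2 constant.
    a1_new = a1 + y1 * y2 * (a2 - a2_new)
    return a1_new, a2_new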

Understanding the first heuristic


The idea behind the first heuristic is pretty simple: each time SMO examines an example, it
checks whether or not the KKT conditions are violated. Recall that at least one KKT condition
must be violated. If the conditions are met, then it tries another example. So if there are millions
of examples, and only a few of them violate the KKT conditions, it will spend a lot of time
examining useless examples. In order to avoid that, the algorithm concentrates its time on
examples in which the Lagrange multiplier is not equal to 0 or C, because they are the most
likely to violate the conditions (Code Listing 38).

Code Listing 38

def get_non_bound_indexes(self):
return np.where(np.logical_and(self.alphas > 0,
self.alphas < self.C))[0]

# First heuristic: loop over examples where alpha is not 0 and not C
# they are the most likely to violate the KKT conditions
# (the non-bound subset).
def first_heuristic(self):
num_changed = 0
non_bound_idx = self.get_non_bound_indexes()

for i in non_bound_idx:
num_changed += self.examine_example(i)
return num_changed

Because solving the problem analytically involves two Lagrange multipliers, it is possible that a
bound multiplier (whose value is equal to 0 or C) has become KKT-violated. That is why the
main routine alternates between all examples and the non-bound subset (Code Listing 39). Note
that the algorithm finishes when progress is no longer made.

Code Listing 39

def main_routine(self):
num_changed = 0
examine_all = True

while num_changed > 0 or examine_all:


num_changed = 0

if examine_all:
for i in range(self.m):
num_changed += self.examine_example(i)
else:
num_changed += self.first_heuristic()

if examine_all:
examine_all = False
elif num_changed == 0:
examine_all = True

Understanding the second heuristic


The goal of this second heuristic is to select the Lagrange multiplier for which the step taken will
be maximal.

How do we update α_2? We use the following formula:

α_2^new = α_2 + y_2(E_1 - E_2)/η

Remember that in this case, we have already chosen the value α_2. Our goal is to pick the
α_1 for which α_2 will have the biggest change. This formula can be rewritten as follows:

α_2^new = α_2 + step

with:

step = y_2(E_1 - E_2)/η

So, to pick the best α_1 amongst several, we need to compute the value of step for each α_1
and select the one with the biggest step. The problem here is that we need to call the kernel
function three times for each step, and this is costly. Instead of doing that, Platt came up with the
following approximation:

step ≈ |E_1 - E_2|

As a result, selecting the biggest step is done by taking the α_1 with the smallest error E_1 if E_2 is
positive, and the α_1 with the biggest error E_1 if E_2 is negative.

This approximation is visible in the method second_heuristic of Code Listing 40.

Code Listing 40

def second_heuristic(self, non_bound_indices):
    i1 = -1
    if len(non_bound_indices) > 1:
        max = 0

        for j in non_bound_indices:
            E1 = self.errors[j] - self.y[j]
            step = abs(E1 - self.E2)  # approximation
            if step > max:
                max = step
                i1 = j
    return i1

def examine_example(self, i2):
    self.y2 = self.y[i2]
    self.a2 = self.alphas[i2]
    self.X2 = self.X[i2]
    self.E2 = self.get_error(i2)

    r2 = self.E2 * self.y2

    if not((r2 < -self.tol and self.a2 < self.C) or
           (r2 > self.tol and self.a2 > 0)):
        # The KKT conditions are met, SMO looks at another example.
        return 0

    # Second heuristic A: choose the Lagrange multiplier that
    # maximizes the absolute error.
    non_bound_idx = list(self.get_non_bound_indexes())
    i1 = self.second_heuristic(non_bound_idx)

    if i1 >= 0 and self.take_step(i1, i2):
        return 1

    # Second heuristic B: Look for examples making positive
    # progress by looping over all non-zero and non-C alpha,
    # starting at a random point.
    if len(non_bound_idx) > 0:
        rand_i = randrange(len(non_bound_idx))
        for i1 in non_bound_idx[rand_i:] + non_bound_idx[:rand_i]:
            if self.take_step(i1, i2):
                return 1

    # Second heuristic C: Look for examples making positive progress
    # by looping over all possible examples, starting at a random
    # point.
    rand_i = randrange(self.m)
    all_indices = list(range(self.m))
    for i1 in all_indices[rand_i:] + all_indices[:rand_i]:
        if self.take_step(i1, i2):
            return 1

    # Extremely degenerate circumstances, SMO skips the first example.
    return 0

Summary
Understanding the SMO algorithm can be tricky because a lot of the code is here for
performance reasons, or to handle specific degenerate cases. However, at its core, the
algorithm remains simple and is faster than convex optimization solvers. Over time, people have
discovered new heuristics to improve this algorithm, and popular libraries like LIBSVM use an
SMO-like algorithm. Note that although this is the standard way of solving the SVM problem, other methods exist, such as gradient descent and stochastic gradient descent (SGD), the latter being particularly useful for online learning and for dealing with huge datasets.
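If you want to try the SGD route, scikit-learn provides it out of the box: with the hinge loss, SGDClassifier trains a linear classifier that behaves like a linear SVM. Here is a minimal sketch using the dataset from Appendix A (the hyperparameters are illustrative, not tuned):

from sklearn.linear_model import SGDClassifier
from succinctly.datasets import get_dataset, linearly_separable as ls

X, y = get_dataset(ls.get_training_examples)

# loss='hinge' makes SGDClassifier minimize the same loss as a linear SVM.
clf = SGDClassifier(loss='hinge', random_state=88)
clf.fit(X, y)

print(clf.predict([[8, 7], [2, 7]]))  # one example from each class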

Knowing how the SMO algorithm works will help you decide if it is the best method for the
problem you want to solve. I strongly advise you to try implementing it yourself. In the Stanford
CS229 course, you can find the description of a simplified version of the algorithm, which is a
good start. Then, in Sequential Minimal Optimization (Platt J. C., 1998), you can read the full
description of the algorithm. The Python code available in Appendix B has been written from the
pseudo-code from this paper and indicates in comments which parts of the code correspond to
which equations in the paper.

Chapter 8 Multi-Class SVMs

SVMs are able to generate binary classifiers. However, we are often faced with datasets having
more than two classes. For instance, the original wine dataset actually contains data from three
different producers. There are several approaches that allow SVMs to work for multi-class
classification. In this chapter, we will review some of the most popular multi-class methods and
explain where they come from.

For all code examples in this chapter, we will use the dataset generated by Code Listing 41 and
displayed in Figure 51.

Code Listing 41

import numpy as np

def load_X():
return np.array([[1, 6], [1, 7], [2, 5], [2, 8],
[4, 2], [4, 3], [5, 1], [5, 2],
[5, 3], [6, 1], [6, 2], [9, 4],
[9, 7], [10, 5], [10, 6], [11, 6],
[5, 9], [5, 10], [5, 11], [6, 9],
[6, 10], [7, 10], [8, 11]])

def load_y():
return np.array([1, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2,
3, 3, 3, 3, 3,
4, 4, 4, 4, 4, 4, 4])

Figure 51: A four-class classification problem

Solving multiple binary problems

One-against-all
Also called “one-versus-the-rest,” this is probably the simplest approach.

In order to classify K classes, we construct K different binary classifiers. For a given class, the
positive examples are all the points in the class, and the negative examples are all the points
not in the class (Code Listing 42).
Code Listing 42

import numpy as np
from sklearn import svm

# Create a simple dataset


X = load_X()
y = load_y()

# Transform the 4-class problem
# into 4 binary problems.
y_1 = np.where(y == 1, 1, -1)
y_2 = np.where(y == 2, 1, -1)
y_3 = np.where(y == 3, 1, -1)
y_4 = np.where(y == 4, 1, -1)

We train one binary classifier on each problem (Code Listing 43). As a result, we obtain one decision boundary per classifier (Figure 52).

Code Listing 43

# Train one binary classifier on each problem.


y_list = [y_1, y_2, y_3, y_4]
classifiers = []
for y_i in y_list:
clf = svm.SVC(kernel='linear', C=1000)
clf.fit(X, y_i)
classifiers.append(clf)

Figure 52: The One-against-all approach creates one classifier per class

In order to make a new prediction, we use each classifier and predict the class of the classifier that returns a positive answer (Code Listing 44). However, this can give inconsistent results, because a label may be assigned to multiple classes simultaneously, or to none (Bishop, 2006). Figure 53 illustrates this problem: the one-against-all classifier is not able to predict a class for the examples in the blue areas in each corner, because there two classifiers are making a positive prediction, which would result in the example having two classes simultaneously. The same problem occurs in the center, where every classifier makes a negative prediction, so no class can be assigned to an example in this region.

Code Listing 44

def predict_class(X, classifiers):
predictions = np.zeros((X.shape[0], len(classifiers)))
for idx, clf in enumerate(classifiers):
predictions[:, idx] = clf.predict(X)

# returns the class number if only one classifier predicted it
# returns zero otherwise.
return np.where((predictions == 1).sum(1) == 1,
(predictions == 1).argmax(axis=1) + 1,
0)

Figure 53: One-against-all leads to ambiguous decisions

As an alternative solution, Vladimir Vapnik suggested using the class of the classifier for which the value of the decision function is the maximum (Vapnik V. N., 1998). This is demonstrated in Code Listing 45. Note that we use the decision_function method instead of calling the predict method of the classifier. This method returns a real value that will be positive if the example is on the correct side of the classifier, and negative if it is on the other side. It is interesting to note that by taking the maximum of the value, and not the maximum of the absolute value, this approach will choose the class of the hyperplane closest to the example when all classifiers disagree. For instance, the example point (6,4) in Figure 54 will be assigned the blue star class.

Code Listing 45

def predict_class(X, classifiers):
predictions = np.zeros((X.shape[0], len(classifiers)))
for idx, clf in enumerate(classifiers):
predictions[:, idx] = clf.decision_function(X)

# return the argmax of the decision function as suggested by Vapnik.


return np.argmax(predictions, axis=1) + 1

Applying this heuristic gives us classification results with no ambiguity, as shown in Figure 54. The major flaw of this approach is that the different classifiers were trained on different tasks, so there is no guarantee that the quantities returned by decision_function have the same scale (Bishop, 2006). If one decision function returns a result ten times bigger than the results of the others, its class will be assigned incorrectly to some examples.

Figure 54: Applying a simple heuristic avoids the ambiguous decision problem

Another issue with the one-against-all approach is that training sets are imbalanced (Bishop,
2006). For a problem with 100 classes, each having 10 examples, each classifier will be trained
with 10 positive examples and 990 negative examples. Thus, the negative examples will
influence the decision boundary greatly.
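A common mitigation, sketched below, is to give the minority class more weight. scikit-learn's SVC exposes this through its class_weight parameter; the value 'balanced' adjusts weights inversely proportionally to class frequencies (using it here is my suggestion, not part of the one-against-all method itself):

from sklearn import svm
import numpy as np

X = load_X()
y = load_y()

# One-against-all labels for class 1: few positives, many negatives.
y_1 = np.where(y == 1, 1, -1)

# class_weight='balanced' reweighs each class inversely to its
# frequency, so the positives count as much as the negatives.
clf = svm.SVC(kernel='linear', C=1000, class_weight='balanced')
clf.fit(X, y_1)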

Nevertheless, one-against-all remains a popular method for multi-class classification because it is easy to implement and understand.

Note: “[...] In practice the one-versus-the-rest approach is the most widely used in
spite of its ad-hoc formulation and its practical limitations.” (Bishop, 2006)

When using sklearn, LinearSVC automatically uses the one-against-all strategy by default.
You can also specify it explicitly by setting the multi_class parameter to ovr (one-vs-the-rest),
as shown in Code Listing 46.

Code Listing 46

from sklearn.svm import LinearSVC


import numpy as np

X = load_X()
y = load_y()

clf = LinearSVC(C=1000, random_state=88, multi_class='ovr')


clf.fit(X,y)

# Make predictions on two examples.


X_to_predict = np.array([[5,5],[2,5]])
print(clf.predict(X_to_predict)) # prints [2 1]

One-against-one
In this approach, instead of trying to distinguish one class from all the others, we seek to distinguish one class from another one. As a result, we train one classifier per pair of classes, which leads to K(K-1)/2 classifiers for K classes. Each classifier is trained on a subset of the data and produces its own decision boundary (Figure 55).

Predictions are made using a simple voting strategy. Each example we wish to predict is
passed to each classifier, and the predicted class is recorded. Then, the class having the most
votes is assigned to the example (Code Listing 47).

Code Listing 47

from itertools import combinations


from scipy.stats import mode
from sklearn import svm
import numpy as np

# Predict the class having the max number of votes.


def predict_class(X, classifiers, class_pairs):
predictions = np.zeros((X.shape[0], len(classifiers)))
for idx, clf in enumerate(classifiers):
class_pair = class_pairs[idx]
prediction = clf.predict(X)
predictions[:, idx] = np.where(prediction == 1,
class_pair[0], class_pair[1])
return mode(predictions, axis=1)[0].ravel().astype(int)

X = load_X()
y = load_y()

# Create datasets.
training_data = []
class_pairs = list(combinations(set(y), 2))
for class_pair in class_pairs:
class_mask = np.where((y == class_pair[0]) | (y == class_pair[1]))
y_i = np.where(y[class_mask] == class_pair[0], 1, -1)
training_data.append((X[class_mask], y_i))

# Train one classifier per pair of classes.


classifiers = []
for data in training_data:
clf = svm.SVC(kernel='linear', C=1000)
clf.fit(data[0], data[1])
classifiers.append(clf)

# Make predictions on two examples.


X_to_predict = np.array([[5,5],[2,5]])
print(predict_class(X_to_predict, classifiers, class_pairs))
# prints [2 1]

Figure 55: One-against-one constructs one classifier for each pair of classes

With this approach, we are still faced with the ambiguous classification problem. If two classes
have an identical number of votes, it has been suggested that selecting the one with the smaller
index might be a viable (while probably not the best) strategy (Hsu & Lin, A Comparison of
Methods for Multi-class Support Vector Machines, 2002).
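Note that this is the behavior we already get in Code Listing 47: when several classes tie, scipy's mode returns the smallest of the tied values, which amounts to selecting the class with the smaller index. A quick check (assuming the scipy version used in this book keeps that documented behavior):

import numpy as np
from scipy.stats import mode

votes = np.array([[1, 2, 2, 1]])  # classes 1 and 2 receive two votes each
print(mode(votes, axis=1)[0])     # the smaller class, 1, wins the tie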

Figure 56: Predictions are made using a voting scheme

Figure 56 shows us that the decision regions generated by the one-against-one strategy are different from the ones generated by one-against-all (Figure 54). In Figure 57, it is interesting to note that, for regions generated by the one-against-one classifier, a region changes its color only after traversing a hyperplane (denoted by black lines), while this is not the case with one-against-all.

Figure 57: Comparison of one-against-all (left) and one-against-one (right)

The one-against-one approach is the default approach for multi-class classification used in sklearn. Instead of the code in Code Listing 47, you will obtain exactly the same results with the code in Code Listing 48.

Code Listing 48

from sklearn import svm


import numpy as np

X = load_X()
y = load_y()

# Train a multi-class classifier.


clf = svm.SVC(kernel='linear', C=1000)
clf.fit(X,y)

# Make predictions on two examples.


X_to_predict = np.array([[5,5],[2,5]])
print(clf.predict(X_to_predict)) # prints [2 1]

One of the main drawbacks of the one-against-one method is that the classifier will tend to overfit. Moreover, the size of the classifier grows super-linearly with the number of classes, so this method will be slow for large problems (Platt, Cristianini, & Shawe-Taylor, 2000).

DAGSVM
DAGSVM stands for "Directed Acyclic Graph SVM." It was proposed by John Platt et al. in 2000 as an improvement of one-against-one (Platt, Cristianini, & Shawe-Taylor, 2000).

Note: John C. Platt invented the SMO algorithm and Platt Scaling, and proposed
the DAGSVM. Quite a contribution to the SVMs world!

The idea behind DAGSVM is to use the same training as one-against-one, but to speed up
testing by using a directed acyclic graph (DAG) to choose which classifiers to use.

Suppose we have four classes A, B, C, and D, and six classifiers, each trained on a pair of classes: (A, B); (A, C); (A, D); (B, C); (B, D); and (C, D). We use the first classifier, (A, D). If it predicts class A, which is the same as predicting "not class D," and the second classifier, (A, C), also predicts class A (not class C), then the classifiers (B, D), (B, C), and (C, D) can be ignored, because we already know the class is neither C nor D. The last "useful" classifier is (A, B), and if it predicts B, we assign the class B to the data point. This example is illustrated by the red path in Figure 58. Each node of the graph is a classifier for a pair of classes.

Figure 58: Illustration of the path used to make a prediction along a Directed Acyclic graph

With four classes, we used three classifiers to make the prediction, instead of six with one-
against-one. In general, for a problem with K classes, K-1 decision nodes will be evaluated.
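The savings grow quickly with the number of classes. The small computation below compares the K(K-1)/2 classifiers trained by one-against-one with the K-1 decision nodes evaluated by the DAG:

for K in (4, 10, 100):
    trained = K * (K - 1) // 2  # classifiers trained (one per pair)
    evaluated = K - 1           # decision nodes evaluated by the DAG
    print(K, trained, evaluated)

# prints:
# 4 6 3
# 10 45 9
# 100 4950 99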

Substituting the predict_class function in Code Listing 47 with the one in Code Listing 49
gives the same result, but with the benefit of using fewer classifiers.

In Code Listing 49, we implement the DAGSVM approach with a list. We begin with the list of
possible classes, and after each prediction, we remove the one that has been disqualified. In
the end, the remaining class is the one which should be assigned to the example.

Note that Code Listing 49 is here for illustration purposes and should not be used in your
production code, as it is not fast when the dataset (X) is large.

Code Listing 49

def predict_class(X, classifiers, distinct_classes, class_pairs):
results = []
for x_row in X:

class_list = list(distinct_classes)

# After each prediction, delete the rejected class
# until there is only one class.
while len(class_list) > 1:
# We start with the pair of the first and
# last element in the list.
class_pair = (class_list[0], class_list[-1])
classifier_index = class_pairs.index(class_pair)
y_pred = classifiers[classifier_index].predict([x_row])

if y_pred == 1:
class_to_delete = class_pair[1]
else:
class_to_delete = class_pair[0]

class_list.remove(class_to_delete)

results.append(class_list[0])
return np.array(results)

Note: “The DAGSVM is between a factor 1.6 and 2.3 times faster to evaluate than
Max Wins.” (Platt, Cristianini, & Shawe-Taylor, 2000).

Solving a single optimization problem


Instead of trying to solve several binary optimization problems, another approach is to try to
solve a single optimization problem. This approach has been proposed by several people over
the years.

Vapnik, Weston, and Watkins
This method is a generalization of the SVM optimization problem that solves the multi-class classification problem directly. It was discovered independently by Vapnik (Vapnik V. N., 1998) and by Weston & Watkins (Weston & Watkins, 1999). For every class, constraints are added to the optimization problem. As a result, the size of the problem is proportional to the number of classes, and training can be very slow.
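For reference, here is a sketch of the resulting optimization problem, with notation adapted from (Weston & Watkins, 1999); consult the paper for the exact formulation. With one weight vector $w_k$ per class, a slack variable $\xi_i^k$ is introduced for every example and every competing class:

$$
\begin{aligned}
\min_{w,\,b,\,\xi} \quad & \frac{1}{2} \sum_{k=1}^{K} \|w_k\|^2 + C \sum_{i=1}^{m} \sum_{k \neq y_i} \xi_i^k \\
\text{subject to} \quad & w_{y_i} \cdot \mathbf{x}_i + b_{y_i} \geq w_k \cdot \mathbf{x}_i + b_k + 2 - \xi_i^k, \quad \xi_i^k \geq 0, \\
& \text{for } i = 1, \dots, m \text{ and } k \neq y_i
\end{aligned}
$$

Counting the constraints makes the scaling issue visible: there are roughly m(K-1) of them, one per example and per competing class.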

Crammer and Singer


Crammer and Singer (C&S) proposed an alternative approach to multi-class SVMs. Like Weston and Watkins, they solve a single optimization problem, but with fewer slack variables (Crammer & Singer, 2001). This has the benefit of reducing memory usage and training time. However, in their comparative study, Hsu & Lin found that the C&S method was especially slow when using a large value for the C regularization parameter (Hsu & Lin, A Comparison of Methods for Multi-class Support Vector Machines, 2002).

In sklearn, when using LinearSVC you can choose to use the C&S algorithm (Code Listing 50). In Figure 59, we can see that the C&S predictions are different from those of the one-against-all and one-against-one methods.

Code Listing 50

from sklearn import svm


import numpy as np

X = load_X()
y = load_y()

clf = svm.LinearSVC(C=1000, multi_class='crammer_singer')


clf.fit(X,y)

# Make predictions on two examples.


X_to_predict = np.array([[5,5],[2,5]])
print(clf.predict(X_to_predict)) # prints [4 1]

Figure 59: Crammer & Singer algorithm predictions

Which approach should you use?


With so many options available, choosing the multi-class approach best suited to your problem can be difficult.

Hsu and Lin wrote an interesting paper comparing the different multi-class approaches available
for SVMs (Hsu & Lin, A Comparison of Methods for Multi-class Support Vector Machines, 2002).
They conclude that “the one-against-one and DAG methods are more suitable for practical use
than the other methods.” The one-against-one method has the added advantage of being
already available in sklearn, so it should probably be your default choice.

Be sure to remember that LinearSVC uses the one-against-all method by default, and that the Crammer & Singer algorithm might serve your goal better. On this topic, Dogan et al. found that, despite being considerably faster than other algorithms, one-against-all yields hypotheses with a statistically significantly worse accuracy (Dogan, Glasmachers, & Igel, 2011). Table 1 provides an overview of the methods presented in this chapter to help you make a choice.

Table 1: Overview of multi-class SVM methods

| Method name | One-against-all | One-against-one | Weston and Watkins | DAGSVM | Crammer and Singer |
|---|---|---|---|---|---|
| First SVMs usage | 1995 | 1996 | 1999 | 2000 | 2001 |
| Approach | Use several binary classifiers | Use several binary classifiers | Solve a single optimization problem | Use several binary classifiers | Solve a single optimization problem |
| Training approach | Train a single classifier for each class | Train a classifier for each pair of classes | Decomposition method | Same as one-against-one | Decomposition method |
| Number of trained classifiers (K is the number of classes) | K | K(K-1)/2 | 1 | K(K-1)/2 | 1 |
| Testing approach | Select the class with the biggest decision function value | "Max-Wins" voting strategy | Use the classifier | Use a DAG to make predictions on K-1 classifiers | Use the classifier |
| scikit-learn class | LinearSVC | SVC | Not available | Not available | LinearSVC |
| Drawbacks | Class imbalance | Long training time for large K | Long training time | Not available in popular libraries | Long training time |

Summary
Thanks to many improvements over the years, there are now several methods for doing multi-
class classification with SVMs. Each approach has advantages and drawbacks, and most of the
time you will end up using the one available in the library you are using. However, if necessary,
you now know which method can be more helpful to solve your specific problem.

Research on multi-class SVMs is not over. Recent papers on the subject have been focused on
distributed training. For instance, Han & Berg have presented a new algorithm called
“Distributed Consensus Multiclass SVM,” which uses consensus optimization with a modified
version of Crammer & Singer’s formulation (Han & Berg, 2012).

Conclusion

To conclude, I will quote Stuart Russell and Peter Norvig, who wrote:

“You could say that SVMs are successful because of one key insight, one neat trick.”

(Russell & Norvig, 2010)

The key insight is the fact that some examples are more important than others. They are the
closest to the decision boundary, and we call them support vectors. As a result, we discover
that the optimal hyperplane generalizes better than other hyperplanes, and can be constructed
using support vectors only. We saw in detail that we need to solve a convex optimization
problem to find this hyperplane.

The neat trick is the kernel trick. It allows us to use SVMs with non-separable data, and without
it, SVMs would be very limited. We saw that this trick, while it can be difficult to grasp at first, is
in fact quite simple, and can be reused in other learning algorithms.

That’s it. If you have read this book cover to cover, you should now understand how SVMs
work. Another interesting question is why do they work? It is the subject of a field called
computational learning theory (SVMs are in fact coming from statistical learning theory). If you
wish to learn more about this, you can follow this outstanding course or read Learning from Data
(Abu-Mostafa, 2012), which provides a very good introduction on the subject.

You should know that SVMs are not used only for classification. One-Class SVM can be used
for anomaly detection, and Support Vector Regression can be used for regression. They have
not been included in this book in order to keep it succinct, but they are equally interesting topics.
Now that you understand the basics of SVMs, you should be better prepared to study these derivations.

SVMs will not be the solution to all your problems, but I do hope they will now be a tool in your
machine-learning toolbox—a tool that you understand, and that you will enjoy using.

Appendix A: Datasets

Linearly separable dataset


The following code is used to load the simple linearly separable dataset used in most chapters of this book. You can find the source code of the other datasets used in this book in this Bitbucket repository.

Figure 60: The training set
Figure 61: The test set

When a code listing imports the module as in Code Listing 51, it loads the methods displayed in
Code Listing 52.

The method get_training_examples returns the data shown in Figure 60, while the method get_test_examples returns the data of Figure 61.

Code Listing 51

from succinctly.datasets import *


Code Listing 52

import numpy as np

def get_training_examples():
X1 = np.array([[8, 7], [4, 10], [9, 7], [7, 10],
[9, 6], [4, 8], [10, 10]])
y1 = np.ones(len(X1))
X2 = np.array([[2, 7], [8, 3], [7, 5], [4, 4],
[4, 6], [1, 3], [2, 5]])
y2 = np.ones(len(X2)) * -1
return X1, y1, X2, y2

def get_test_examples():
X1 = np.array([[2, 9], [1, 10], [1, 11], [3, 9], [11, 5],
[10, 6], [10, 11], [7, 8], [8, 8], [4, 11],
[9, 9], [7, 7], [11, 7], [5, 8], [6, 10]])
X2 = np.array([[11, 2], [11, 3], [1, 7], [5, 5], [6, 4],
[9, 4],[2, 6], [9, 3], [7, 4], [7, 2], [4, 5],
[3, 6], [1, 6], [2, 3], [1, 1], [4, 2], [4, 3]])
y1 = np.ones(len(X1))
y2 = np.ones(len(X2)) * -1
return X1, y1, X2, y2

A typical usage of this code is shown in Code Listing 53. It uses the method get_dataset from
Code Listing 54, which is loaded with the datasets package.

Code Listing 53

from succinctly.datasets import get_dataset, linearly_separable as ls

# Get the training examples of the linearly separable dataset.


X, y = get_dataset(ls.get_training_examples)

Code Listing 54

import numpy as np

def get_dataset(get_examples):
X1, y1, X2, y2 = get_examples()
X, y = get_dataset_for(X1, y1, X2, y2)
return X, y

def get_dataset_for(X1, y1, X2, y2):
X = np.vstack((X1, X2))
y = np.hstack((y1, y2))
return X, y

def get_generated_dataset(get_examples, n):
X1, y1, X2, y2 = get_examples(n)
X, y = get_dataset_for(X1, y1, X2, y2)
return X, y

Appendix B: The SMO Algorithm

Code Listing 55

import numpy as np
from random import randrange

# Written from the pseudo-code in:
# http://luthuli.cs.uiuc.edu/~daf/courses/optimization/Papers/smoTR.pdf
class SmoAlgorithm:
def __init__(self, X, y, C, tol, kernel, use_linear_optim):
self.X = X
self.y = y
self.m, self.n = np.shape(self.X)
self.alphas = np.zeros(self.m)

self.kernel = kernel
self.C = C
self.tol = tol

self.errors = np.zeros(self.m)
self.eps = 1e-3 # epsilon

self.b = 0

self.w = np.zeros(self.n)
self.use_linear_optim = use_linear_optim

# Compute the SVM output for example i
# Note that Platt uses the convention w.x-b=0
# while we have been using w.x+b in the book.
def output(self, i):
if self.use_linear_optim:
# Equation 1
return float(np.dot(self.w.T, self.X[i])) - self.b
else:
# Equation 10
return np.sum([self.alphas[j] * self.y[j]
* self.kernel(self.X[j], self.X[i])
for j in range(self.m)]) - self.b

# Try to solve the problem analytically.


def take_step(self, i1, i2):
if i1 == i2:
return False

a1 = self.alphas[i1]
y1 = self.y[i1]
X1 = self.X[i1]
E1 = self.get_error(i1)

s = y1 * self.y2

# Compute the bounds of the new alpha2.


if y1 != self.y2:
# Equation 13
L = max(0, self.a2 - a1)
H = min(self.C, self.C + self.a2 - a1)
else:
# Equation 14
L = max(0, self.a2 + a1 - self.C)
H = min(self.C, self.a2 + a1)

if L == H:
return False

k11 = self.kernel(X1, X1)


k12 = self.kernel(X1, self.X[i2])
k22 = self.kernel(self.X[i2], self.X[i2])

# Compute the second derivative of the
# objective function along the diagonal.
# Equation 15
eta = k11 + k22 - 2 * k12

if eta > 0:
# Equation 16
a2_new = self.a2 + self.y2 * (E1 - self.E2) / eta

# Clip the new alpha so that it stays at the end of the line.
# Equation 17
if a2_new < L:
a2_new = L
elif a2_new > H:
a2_new = H
else:
# Under unusual circumstances, eta will not be positive.
# Equation 19
f1 = y1 * (E1 + self.b) - a1 * k11 - s * self.a2 * k12
f2 = self.y2 * (self.E2 + self.b) - s * a1 * k12 \
- self.a2 * k22
L1 = a1 + s * (self.a2 - L)
H1 = a1 + s * (self.a2 - H)
Lobj = L1 * f1 + L * f2 + 0.5 * (L1 ** 2) * k11 \
+ 0.5 * (L ** 2) * k22 + s * L * L1 * k12
Hobj = H1 * f1 + H * f2 + 0.5 * (H1 ** 2) * k11 \
+ 0.5 * (H ** 2) * k22 + s * H * H1 * k12

if Lobj < Hobj - self.eps:
a2_new = L
elif Lobj > Hobj + self.eps:
a2_new = H
else:
a2_new = self.a2

# If alpha2 did not change enough the algorithm
# returns without updating the multipliers.
if abs(a2_new - self.a2) < self.eps * (a2_new + self.a2 \
+ self.eps):
return False

# Equation 18
a1_new = a1 + s * (self.a2 - a2_new)

new_b = self.compute_b(E1, a1, a1_new, a2_new, k11, k12, k22, y1)

delta_b = new_b - self.b

self.b = new_b

# Equation 22
if self.use_linear_optim:
self.w = self.w + y1*(a1_new - a1)*X1 \
+ self.y2*(a2_new - self.a2) * self.X2

# Update the error cache using the new Lagrange multipliers.


delta1 = y1 * (a1_new - a1)
delta2 = self.y2 * (a2_new - self.a2)

# Update the error cache.


for i in range(self.m):
if 0 < self.alphas[i] < self.C:
self.errors[i] += delta1 * self.kernel(X1, self.X[i]) + \
delta2 * self.kernel(self.X2,self.X[i]) \
- delta_b

self.errors[i1] = 0
self.errors[i2] = 0

self.alphas[i1] = a1_new
self.alphas[i2] = a2_new

return True

def compute_b(self, E1, a1, a1_new, a2_new, k11, k12, k22, y1):
# Equation 20
b1 = E1 + y1 * (a1_new - a1) * k11 + \
self.y2 * (a2_new - self.a2) * k12 + self.b

# Equation 21
b2 = self.E2 + y1 * (a1_new - a1) * k12 + \
self.y2 * (a2_new - self.a2) * k22 + self.b

if (0 < a1_new) and (self.C > a1_new):
new_b = b1
elif (0 < a2_new) and (self.C > a2_new):
new_b = b2
else:
new_b = (b1 + b2) / 2.0
return new_b

def get_error(self, i1):
if 0 < self.alphas[i1] < self.C:
return self.errors[i1]
else:
return self.output(i1) - self.y[i1]

def second_heuristic(self, non_bound_indices):
i1 = -1
if len(non_bound_indices) > 1:
max = 0

for j in non_bound_indices:
E1 = self.errors[j] - self.y[j]
step = abs(E1 - self.E2) # approximation
if step > max:
max = step
i1 = j
return i1

def examine_example(self, i2):
self.y2 = self.y[i2]
self.a2 = self.alphas[i2]
self.X2 = self.X[i2]
self.E2 = self.get_error(i2)

r2 = self.E2 * self.y2

if not((r2 < -self.tol and self.a2 < self.C) or
(r2 > self.tol and self.a2 > 0)):
# The KKT conditions are met, SMO looks at another example.
return 0

# Second heuristic A: choose the Lagrange multiplier which
# maximizes the absolute error.
non_bound_idx = list(self.get_non_bound_indexes())
i1 = self.second_heuristic(non_bound_idx)

if i1 >= 0 and self.take_step(i1, i2):
return 1

# Second heuristic B: Look for examples making positive
# progress by looping over all non-zero and non-C alpha,
# starting at a random point.
if len(non_bound_idx) > 0:
rand_i = randrange(len(non_bound_idx))
for i1 in non_bound_idx[rand_i:] + non_bound_idx[:rand_i]:
if self.take_step(i1, i2):
return 1

# Second heuristic C: Look for examples making positive progress
# by looping over all possible examples, starting at a random
# point.
rand_i = randrange(self.m)
all_indices = list(range(self.m))
for i1 in all_indices[rand_i:] + all_indices[:rand_i]:
if self.take_step(i1, i2):
return 1

# Extremely degenerate circumstances, SMO skips the first example.


return 0

def error(self, i2):
return self.output(i2) - self.y2

def get_non_bound_indexes(self):
return np.where(np.logical_and(self.alphas > 0,
self.alphas < self.C))[0]

# First heuristic: loop over examples where alpha is not 0 and not C
# they are the most likely to violate the KKT conditions
# (the non-bound subset).
def first_heuristic(self):
num_changed = 0
non_bound_idx = self.get_non_bound_indexes()
for i in non_bound_idx:
num_changed += self.examine_example(i)
return num_changed

def main_routine(self):
num_changed = 0
examine_all = True

while num_changed > 0 or examine_all:
num_changed = 0

if examine_all:
for i in range(self.m):
num_changed += self.examine_example(i)
else:
num_changed += self.first_heuristic()

if examine_all:
examine_all = False
elif num_changed == 0:
examine_all = True

Code Listing 56 demonstrates how to instantiate an SmoAlgorithm object, run the algorithm, and
print the result.

Code Listing 56

import numpy as np
from random import seed
from succinctly.datasets import linearly_separable, get_dataset
from succinctly.algorithms.smo_algorithm import SmoAlgorithm

def linear_kernel(x1, x2):
return np.dot(x1, x2)

def compute_w(multipliers, X, y):
    return np.sum([multipliers[i] * y[i] * X[i]
                   for i in range(len(y))], axis=0)

if __name__ == '__main__':
seed(5) # to have reproducible results

X_data, y_data = get_dataset(linearly_separable.get_training_examples)


smo = SmoAlgorithm(X_data, y_data, C=10, tol=0.001,
kernel=linear_kernel, use_linear_optim=True)

smo.main_routine()

w = compute_w(smo.alphas, X_data, y_data)

print('w = {}'.format(w))

# -smo.b because Platt uses the convention w.x-b=0


print('b = {}'.format(-smo.b))

# w = [0.4443664 1.1105648]
# b = -9.66268641132

Bibliography

Abu-Mostafa, Y. S. (2012). Learning From Data. AMLBook.


Biernat, E., & Lutz, M. (2016). Data science: fondamentaux et études de cas. Eyrolles.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
Burges, C. J. (1998). A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 121-167.
Crammer, K., & Singer, Y. (2001). On the Algorithmic Implementation of Multiclass Kernel-
based Vector Machines. Journal of Machine Learning Research 2.
Cristianini, N., & Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines.
Cambridge University Press.
Dogan, U., Glasmachers, T., & Igel, C. (2011). Fast Training of Multi-Class Support Vector
Machines.
El Ghaoui, L. (2015). Optimization Models and Applications. Retrieved from http://livebooklabs.com/keeppies/c5a5868ce26b8125
Gershwin, S. B. (2010). KKT Examples. Retrieved from MIT Mechanical Engineering Course: http://ocw.mit.edu/courses/mechanical-engineering/2-854-introduction-to-manufacturing-systems-fall-2010/lecture-notes/MIT2_854F10_kkt_ex.pdf
Gretton, A. (2016, 03 05). Lecture 9: Support Vector Machines. Retrieved from http://www.gatsby.ucl.ac.uk/~gretton/coursefiles/Slides5A.pdf
Han, X., & Berg, A. C. (2012). DCMSVM: Distributed Parallel Training For Single-Machine
Multiclass Classifiers.
Hsu, C.-W., & Lin, C.-J. (2002). A Comparison of Methods for Multi-class Support Vector
Machines. IEEE transactions on neural networks.
Hsu, C.-W., Chang, C.-C., & Lin, C.-J. (2016, 10 02). A Practical Guide to Support Vector Classification. Retrieved from LIBSVM website: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
Ng, A. (n.d.). CS229 Lecture notes - Part V Support Vector Machines. Retrieved from http://cs229.stanford.edu/notes/cs229-notes3.pdf
Osuna, E., Freund, R., & Girosi, F. (1997). An Improved Training Algorithm for Support Vector Machines. Proceedings of IEEE NNSP'97.
Platt, J. C. (1998). Sequential Minimal Optimization: A Fast Algorithm for Training Support
Vector Machines. Microsoft Research.

Platt, J. C., Cristianini, N., & Shawe-Taylor, J. (2000). Large margin DAGs for multiclass
classification. MIT Press.
Rojas, R. (1996). Neural Networks: A Systematic Introduction. Springer.
Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach. Pearson.
Tyson Smith, B. (2004). Lagrange Multipliers Tutorial in the Context of Support Vector
Machines. Newfoundland.
Vapnik, V. (1982). Estimation of Dependences Based on Empirical Data. Springer.
Vapnik, V. N. (1998). Statistical Learning Theory. Wiley.
Weston, J., & Watkins, C. (1999). Support Vector Machines for Multi-Class Pattern Recognition.
Proceedings of the Seventh European Symposium on Artificial Neural Networks.

