SVM Class

SVM is a machine learning algorithm that uses a maximum-margin hyperplane to divide data points into two classes. It works by finding the hyperplane that maximizes the margin between the two classes while minimizing misclassifications. The data points that lie on the margin, or that violate it, are called support vectors. SVM can be extended to non-linear classification using kernel methods that map inputs to high-dimensional feature spaces. Solving the SVM optimization problem leads to a dual problem involving Lagrange multipliers, and the solution weight vector can be expressed as a linear combination of the training examples with non-zero multipliers, i.e. the support vectors.

Introduction to Support Vector Machines
History of SVM
- SVM is related to statistical learning theory [3]
- SVM was first introduced in 1992 [1]
- SVM became popular because of its success in handwritten digit recognition
  - 1.1% test error rate for SVM, the same as the error rate of a carefully constructed neural network, LeNet 4
  - See Section 5.11 in [2] or the discussion in [3] for details
- SVM is now regarded as an important example of "kernel methods", one of the key areas in machine learning
  - Note: the meaning of "kernel" here is different from the "kernel" function used for Parzen windows

[1] B.E. Boser et al. A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152, Pittsburgh, 1992.
[2] L. Bottou et al. Comparison of classifier methods: a case study in handwritten digit recognition. Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, pp. 77-82, 1994.
[3] V. Vapnik. The Nature of Statistical Learning Theory. 2nd edition, Springer, 1999.

Linear Classifiers
x → f → y^est, where f(x, w, b) = sign(w·x + b)
- w: weight vector
- x: data vector
[Figure: two classes of datapoints, marked +1 and -1, shown with several candidate separating lines]

How would you classify this data? Any of these lines would separate the two classes... but which is best?
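A minimal sketch of this linear decision rule in Python (not part of the slides; the weight vector and bias below are hypothetical values chosen for illustration):

```python
# Sketch of the linear decision rule f(x, w, b) = sign(w·x + b).
import numpy as np

def linear_classify(x, w, b):
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    return 1 if np.dot(w, x) + b >= 0 else -1

w = np.array([1.0, -2.0])   # hypothetical weight vector
b = 0.5                     # hypothetical bias
print(linear_classify(np.array([3.0, 1.0]), w, b))  # 1*3 - 2*1 + 0.5 = 1.5 > 0, so +1
print(linear_classify(np.array([0.0, 2.0]), w, b))  # 1*0 - 2*2 + 0.5 = -3.5 < 0, so -1
```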

Classifier Margin
x → f → y^est, where f(x, w, b) = sign(w·x + b)
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
[Figure: a separating line with the margin band drawn symmetrically on both sides, between the +1 and -1 points]

Maximum Margin
x → f → y^est, where f(x, w, b) = sign(w·x + b)
The maximum margin linear classifier is the linear classifier with the maximum margin. This is the simplest kind of SVM, called a linear SVM (LSVM).
Support vectors are those datapoints that the margin pushes up against.
[Figure: the maximum-margin separating line, with the margin boundaries passing through the support vectors]
Why Maximum Margin?
f(x, w, b) = sign(w·x + b)
Intuitively, the boundary that is farthest from the datapoints of both classes is the safest choice: a small error in the location of the boundary, or a small perturbation of the data, is least likely to cause a misclassification. Statistical learning theory [3] also links a larger margin to better generalization.
Support vectors are those datapoints that the margin pushes up against; as the following slides show, they alone determine the maximum-margin boundary.
How to calculate the distance from a point to a line?
[Figure: a datapoint x and the line w·x + b = 0, with its normal vector w]
- x: data vector
- w: normal vector of the line
- b: bias (offset) value
- See http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html
- In our case the line is w1·x1 + w2·x2 + b = 0; thus w = (w1, w2) and x = (x1, x2)
Estimate the Margin
[Figure: a datapoint x and the line w·x + b = 0, with its normal vector w]
- What is the distance expression for a point x to the line w·x + b = 0?

  d(x) = |w·x + b| / ||w||_2 = |w·x + b| / sqrt(w_1^2 + ... + w_d^2)
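The distance formula is easy to check numerically. Below is a minimal sketch (not from the slides) that evaluates d(x) for a hypothetical w, b and query point using NumPy:

```python
# Point-to-hyperplane distance d(x) = |w·x + b| / ||w||_2 on a toy 2-D example.
import numpy as np

w = np.array([2.0, 1.0])   # hypothetical normal vector of the line w·x + b = 0
b = -4.0                   # hypothetical offset
x = np.array([3.0, 5.0])   # a query point

distance = abs(np.dot(w, x) + b) / np.linalg.norm(w)
print(distance)            # |2*3 + 1*5 - 4| / sqrt(5) = 7 / sqrt(5) ≈ 3.13
```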
Large-margin Decision Boundary
- The decision boundary should be as far away from the data of both classes as possible
- We should maximize the margin, m
- The distance between the origin and the line w^T·x = -b is |b| / ||w||
[Figure: Class 1 and Class 2 points separated by the decision boundary, with the margin m between them]
Finding the Decision Boundary
- Let {x1, ..., xn} be our data set and let yi ∈ {1, -1} be the class label of xi
- The decision boundary should classify all points correctly
- To see this: when yi = -1, we require w·xi + b ≤ -1; when yi = 1, we require w·xi + b ≥ 1. For support vectors, yi(w·xi + b) = 1.
- The decision boundary can be found by solving the constrained optimization problem below
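The constrained problem itself appears only as an equation image in the original slides; the standard hard-margin formulation it refers to is:

```latex
\min_{\mathbf{w},\, b} \ \tfrac{1}{2}\lVert \mathbf{w} \rVert^2
\quad \text{subject to} \quad
y_i(\mathbf{w}\cdot \mathbf{x}_i + b) \ge 1, \quad i = 1, \dots, n
```

Minimizing ||w||^2 corresponds to maximizing the margin m = 2 / ||w|| under these constraints.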
Next step... (optional)
- Converting SVM to a form we can solve
  - Dual form
- Allowing a few errors
  - Soft margin
- Allowing a nonlinear boundary
  - Kernel functions
The Dual Problem (we ignore the derivation)
- The new objective function is in terms of the αi only
- It is known as the dual problem: if we know w, we know all αi; if we know all αi, we know w
- The original problem is known as the primal problem
- The objective function of the dual problem needs to be maximized!
- The dual problem is given below; the constraint αi ≥ 0 comes from the properties of the Lagrange multipliers, and the constraint Σi αi yi = 0 is the result of differentiating the original Lagrangian w.r.t. b
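The dual problem was shown as an equation image in the original slides; the standard hard-margin form is:

```latex
\max_{\boldsymbol{\alpha}} \ \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
    \alpha_i \alpha_j\, y_i y_j\, (\mathbf{x}_i \cdot \mathbf{x}_j)
\quad \text{subject to} \quad
\alpha_i \ge 0, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0
```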
The Dual Problem
- This is a quadratic programming (QP) problem
  - A global maximum of the αi can always be found
- w can be recovered from the αi as shown below
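The recovery formula was an image in the original slides; the standard expression is:

```latex
\mathbf{w} = \sum_{i=1}^{n} \alpha_i\, y_i\, \mathbf{x}_i
```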
Characteristics of the Solution
- Many of the αi are zero (see the next page for an example)
  - w is a linear combination of a small number of data points
  - This "sparse" representation can be viewed as data compression, as in the construction of the kNN classifier
- xi with non-zero αi are called support vectors (SV)
  - The decision boundary is determined only by the SV
  - Let tj (j = 1, ..., s) be the indices of the s support vectors; we can then write w as a sum over the support vectors (see below)
- For testing with a new data point z
  - Compute the discriminant value (see below) and classify z as class 1 if the sum is positive, and class 2 otherwise
  - Note: w need not be formed explicitly
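The two expressions referred to above appeared as images in the original slides; in standard notation they are:

```latex
\mathbf{w} = \sum_{j=1}^{s} \alpha_{t_j}\, y_{t_j}\, \mathbf{x}_{t_j},
\qquad
f(\mathbf{z}) = \sum_{j=1}^{s} \alpha_{t_j}\, y_{t_j}\, \bigl(\mathbf{x}_{t_j} \cdot \mathbf{z}\bigr) + b
```

Only inner products between the support vectors and z are needed, which is exactly what the kernel trick will exploit later.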
A Geometrical Interpretation
[Figure: Class 1 and Class 2 points with the decision boundary; only the three support vectors carry non-zero multipliers (α1 = 0.8, α6 = 1.4, α8 = 0.6), while α2 = α3 = α4 = α5 = α7 = α9 = α10 = 0]
Extension to Non-linear Decision Boundary
- So far, we have only considered a large-margin classifier with a linear decision boundary
- How do we generalize it to become nonlinear?
- Key idea: transform xi to a higher-dimensional space to "make life easier"
  - Input space: the space where the points xi are located
  - Feature space: the space of φ(xi) after the transformation
Transforming the Data (c.f. DHS Ch. 5)
[Figure: points in the input space mapped by φ(.) to points φ(x) in the feature space]
- Note: in practice the feature space is of higher dimension than the input space
- Computation in the feature space can be costly because it is high dimensional
  - The feature space is typically infinite-dimensional!
- The kernel trick comes to the rescue
The Kernel Trick
- Recall the SVM optimization problem (the dual above)
- The data points only appear as inner products
- As long as we can calculate the inner product in the feature space, we do not need the mapping explicitly
- Many common geometric operations (angles, distances) can be expressed by inner products
- Define the kernel function K as shown below
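The kernel definition and the kernelized dual were equation images in the original slides; in standard notation:

```latex
K(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i)\cdot\varphi(\mathbf{x}_j),
\qquad
\max_{\boldsymbol{\alpha}} \ \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}
    \alpha_i \alpha_j\, y_i y_j\, K(\mathbf{x}_i, \mathbf{x}_j)
\quad \text{s.t.} \quad \alpha_i \ge 0, \ \ \sum_{i} \alpha_i y_i = 0
```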
An Example for φ(.) and K(.,.)
- Suppose φ(.) is given as shown below
- An inner product in the feature space then reduces to a simple function of the inner product in the input space
- So, if we define the kernel function accordingly, there is no need to carry out φ(.) explicitly
- This use of a kernel function to avoid carrying out φ(.) explicitly is known as the kernel trick
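The specific φ(.) on the original slide is an image; a standard choice (assumed here) for two-dimensional inputs that makes the identity work out is:

```latex
\varphi\!\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)
 = \bigl(1,\ \sqrt{2}\,x_1,\ \sqrt{2}\,x_2,\ x_1^2,\ x_2^2,\ \sqrt{2}\,x_1 x_2\bigr),
\qquad
\varphi(\mathbf{x})\cdot\varphi(\mathbf{y}) = (1 + \mathbf{x}\cdot\mathbf{y})^2 = K(\mathbf{x}, \mathbf{y})
```

Evaluating K(x, y) = (1 + x·y)^2 directly therefore gives the same value as mapping both points into the six-dimensional feature space and taking the inner product there.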
More on Kernel Functions
- Not all similarity measures can be used as kernel functions, however
- The kernel function needs to satisfy the Mercer condition, i.e., the function must be "positive (semi-)definite"
- This implies that the n by n kernel matrix, in which the (i, j)-th entry is K(xi, xj), is always positive semi-definite
- This also means that the optimization problem is convex and can be solved in polynomial time!
Examples of Kernel Functions
- Polynomial kernel with degree d
- Radial basis function (RBF) kernel with width σ
  - Closely related to radial basis function neural networks
  - The feature space is infinite-dimensional
- Sigmoid kernel with parameters κ and θ
  - It does not satisfy the Mercer condition for all κ and θ
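The kernel formulas themselves were images in the original slides; the standard forms, using σ for the RBF width and κ, θ for the sigmoid parameters, are:

```latex
K(\mathbf{x}, \mathbf{y}) = (\mathbf{x}\cdot\mathbf{y} + 1)^{d},
\qquad
K(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{y}\rVert^{2}}{2\sigma^{2}}\right),
\qquad
K(\mathbf{x}, \mathbf{y}) = \tanh(\kappa\, \mathbf{x}\cdot\mathbf{y} + \theta)
```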
Non-linear SVMs: Feature spaces
- General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

  Φ: x → φ(x)
Example
- Suppose we have 5 one-dimensional data points
  - x1 = 1, x2 = 2, x3 = 4, x4 = 5, x5 = 6, with 1, 2, 6 as class 1 and 4, 5 as class 2, so y1 = 1, y2 = 1, y3 = -1, y4 = -1, y5 = 1
- We use the polynomial kernel of degree 2
  - K(x, y) = (xy + 1)^2
  - C is set to 100
- We first find αi (i = 1, ..., 5) by maximizing the dual objective subject to 0 ≤ αi ≤ C and Σi αi yi = 0
Example (continued)
[Figure: value of the discriminant function along the input axis; the regions containing 1, 2 and 6 are assigned to class 1, and the region containing 4 and 5 to class 2]
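As a cross-check, this toy example can be reproduced with an off-the-shelf solver. The sketch below is not from the slides; it assumes scikit-learn is available and configures SVC so that its polynomial kernel matches K(x, y) = (xy + 1)^2 with C = 100:

```python
# Reproduce the 5-point toy example with scikit-learn (assumed dependency).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0], [2.0], [4.0], [5.0], [6.0]])  # one-dimensional inputs
y = np.array([1, 1, -1, -1, 1])                    # class labels

# kernel='poly' with gamma=1, coef0=1, degree=2 gives K(x, y) = (x*y + 1)^2
clf = SVC(kernel='poly', degree=2, gamma=1.0, coef0=1.0, C=100.0)
clf.fit(X, y)

print(clf.support_)                 # indices of the support vectors (non-zero alpha_i)
print(clf.dual_coef_)               # alpha_i * y_i for the support vectors
print(clf.predict([[3.0], [4.5]]))  # classify new points with the learned discriminant
```

The support vectors are a subset of the five points, and the learned discriminant is a quadratic function of x, consistent with the figure described above.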
Conclusion
- SVM is a useful alternative to neural networks
- Two key concepts of SVM: maximize the margin and the kernel trick
- Many SVM implementations are available on the web for you to try on your data set!
Resources
- http://www.kernel-machines.org/
- http://www.support-vector.net/
- http://www.support-vector.net/icml-tutorial.pdf
- http://www.kernel-machines.org/papers/tutorial-nips.ps.gz
- http://www.clopinet.com/isabelle/Projects/SVM/applist.html
Appendix: Distance from a point to a line
- Equation for the line: let u be a parameter; then any point P on the line through P1 and P2 can be described as
  - P = P1 + u (P2 - P1)
- Let P be the intersection point, i.e. the foot of the perpendicular from P3 to the line, at parameter value u
- Then u can be determined from the fact that (P2 - P1) is orthogonal to (P3 - P). That is:
  - (P3 - P) · (P2 - P1) = 0
  - P = P1 + u (P2 - P1)
  - P1 = (x1, y1), P2 = (x2, y2), P3 = (x3, y3)
[Figure: the line through P1 and P2, the external point P3, and its perpendicular projection P onto the line]
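Solving the orthogonality condition for u (a standard step, filled in here because the slide shows only the setup) gives:

```latex
u = \frac{(x_3 - x_1)(x_2 - x_1) + (y_3 - y_1)(y_2 - y_1)}
         {(x_2 - x_1)^2 + (y_2 - y_1)^2}
```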
Distance and margin
- The coordinates of the projection point P are
  - x = x1 + u (x2 - x1)
  - y = y1 + u (y2 - y1)
- The distance between the point P3 and the line is therefore the distance between P = (x, y) above and P3
- Thus, d = |P3 - P|
