Lec5 Support Vector Machine

The document provides an overview of Support Vector Machines (SVM), focusing on their ability to classify linearly separable binary sets by designing a hyperplane that maximizes the margin between classes. It discusses the mathematical formulation of SVM as a minimization problem and introduces the kernel trick for handling non-linear separability by mapping data into higher-dimensional spaces. Various kernel functions, including linear, polynomial, and radial basis functions, are mentioned, along with their applications in classification tasks.


Neural Network and Learning Machines

Support Vector Machine (SVM)

Instructor: Ahmed Yousry
Overview
• SVM for a linearly separable binary set
• Main goal: design a hyperplane that classifies all training vectors into two classes
• The best model is the one that leaves the maximum margin from both classes
• The two class labels are +1 (positive examples) and -1 (negative examples)
[Figure: linearly separable points plotted against X1 and X2, with the separating hyperplane and margin]
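As a minimal sketch of this setup (assuming scikit-learn; the toy data below is invented for illustration and is not from the slides):

```python
# Minimal sketch: fit a linear SVM on a tiny linearly separable set.
# Assumes scikit-learn; the data is synthetic, for illustration only.
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],   # positive examples (+1)
              [0.0, 0.0], [0.5, 1.0], [1.0, 0.5]])  # negative examples (-1)
y = np.array([+1, +1, +1, -1, -1, -1])

# A large C approximates the hard-margin SVM described in these slides
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print("w =", clf.coef_[0])        # normal vector of the hyperplane
print("b =", clf.intercept_[0])   # bias term
print("prediction:", clf.predict([[2.0, 2.5]]))
```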
Intuition behind SVM
Margin in terms of w
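The slide body is a figure; the standard relation it depicts can be reconstructed. With the hyperplane scaled so that the closest points satisfy |wᵀx + b| = 1, the margin follows from the point-to-plane distance formula:

```latex
% Distance from a point x to the hyperplane w^T x + b = 0:
%   d(x) = |w^T x + b| / ||w||.
% The closest points on each side satisfy |w^T x + b| = 1, so each
% class sits at distance 1/||w|| from the hyperplane, and the total
% margin between the two classes is
\[
  \text{margin} \;=\; \frac{2}{\lVert w \rVert}.
\]
```

Maximizing the margin is therefore equivalent to minimizing ‖w‖, which leads to the minimization problem on the next slide.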
SVM as a minimization problem
• Quadratic objective
• Linear constraints
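The formulation itself appears as an image on the slides; the standard hard-margin primal it refers to is:

```latex
% Hard-margin SVM primal: a quadratic objective under linear constraints.
\[
  \min_{w,\;b}\ \tfrac{1}{2}\,\lVert w \rVert^{2}
  \quad\text{subject to}\quad
  y_i\,\bigl(w^{\top}x_i + b\bigr) \ge 1,
  \qquad i = 1,\dots,N.
\]
```

The objective is quadratic in w and each constraint is linear in (w, b), so this is a quadratic program.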
We wish to find the w and b which minimize, and the α which maximizes, the Lagrangian L_P (whilst keeping αi ≥ 0 ∀i). We can do this by differentiating L_P with respect to w and b and setting the derivatives to zero:
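(The equations are an image on the slide; the standard stationarity conditions this step produces are reconstructed below.)

```latex
% Primal Lagrangian, with multipliers alpha_i >= 0:
%   L_P = (1/2)||w||^2 - sum_i alpha_i [ y_i (w^T x_i + b) - 1 ].
% Differentiating and setting the derivatives to zero:
\[
  \frac{\partial L_P}{\partial w} = 0
  \;\Longrightarrow\;
  w = \sum_{i} \alpha_i\, y_i\, x_i,
  \qquad
  \frac{\partial L_P}{\partial b} = 0
  \;\Longrightarrow\;
  \sum_{i} \alpha_i\, y_i = 0.
\]
```

Substituting these back into L_P eliminates w and b and yields the dual problem in the α alone.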
A Geometrical Interpretation
[Figure: points of Class 1 and Class 2 with the separating hyperplane. The Lagrange multipliers are α1 = 0.8, α6 = 1.4, α8 = 0.6 for the three points on the margin (the support vectors), and α2 = α3 = α4 = α5 = α7 = α9 = α10 = 0 for all other points]
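The same picture can be reproduced programmatically. This sketch (again assuming scikit-learn, with synthetic data) shows that only the support vectors receive nonzero multipliers:

```python
# Sketch: inspect which points become support vectors.
# scikit-learn's SVC exposes alpha_i * y_i for the support vectors via
# dual_coef_; every point not listed in clf.support_ has alpha_i = 0.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (10, 2)),    # Class 1 cluster
               rng.normal(3, 0.5, (10, 2))])   # Class 2 cluster
y = np.array([-1] * 10 + [+1] * 10)

clf = SVC(kernel="linear", C=1e6).fit(X, y)

print("support vector indices:", clf.support_)
print("alpha_i * y_i:", clf.dual_coef_[0])  # nonzero only for support vectors
```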
Examples
[Five worked-example slides; the examples themselves were shown as images]
Kernel trick
Non-linear SVMs: Feature spaces
• General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)

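A concrete instance (not on the slide, but standard) makes the mapping tangible: in two dimensions, a degree-2 polynomial kernel computes the inner product of an explicit six-dimensional feature map without ever forming it.

```latex
% Explicit feature map for 2-D input x = (x_1, x_2):
%   phi(x) = ( 1, sqrt(2) x_1, sqrt(2) x_2, x_1^2, x_2^2, sqrt(2) x_1 x_2 ).
% Its inner product collapses to a kernel evaluation in the input space:
\[
  \varphi(x)^{\top}\varphi(z) \;=\; \bigl(1 + x^{\top}z\bigr)^{2} \;=\; K(x, z),
\]
% so the 6-dimensional dot product costs only a 2-dimensional one.
```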
SVM for non-linear separability
Kernels
• Why use kernels?
  • Make a non-separable problem separable
  • Map data into a better representational space
• Common kernels
  • Linear: K(x, z) = xᵀz
  • Polynomial: K(x, z) = (1 + xᵀz)^d
    • Gives feature conjunctions
  • Radial basis function (infinite-dimensional feature space)
    • Has not been very useful in text classification

A short comparison of these kernels on a toy problem follows below.
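As a sketch of the effect of the kernel choice (scikit-learn is assumed, and the XOR-style data is synthetic, not from the slides):

```python
# Sketch: kernels make a non-separable problem separable.
# XOR-like data is not linearly separable, but a polynomial or RBF
# kernel separates it by implicitly mapping to a richer feature space.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, +1, +1, -1])  # XOR labels

for kernel in ("linear", "poly", "rbf"):
    # coef0=1 with degree=2 matches the slide's (1 + x^T z)^d form,
    # up to scikit-learn's internal gamma scaling
    clf = SVC(kernel=kernel, degree=2, coef0=1, gamma="scale", C=1e6).fit(X, y)
    print(f"{kernel:6s} kernel: training accuracy = {clf.score(X, y):.2f}")

# Expected: the linear kernel cannot reach 1.00 on XOR,
# while poly (degree 2) and rbf can.
```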
Thanks
