
SUPPORT VECTOR MACHINE

SUPPORT VECTOR MACHINE (SVM) CLASSIFIER

• SVM is a supervised machine learning algorithm.

• It can be used for both classification and regression problems.

• It is mostly used for classification.

• Each data item is plotted as a point in n-dimensional space (where n is the number of features), with the value of each feature being the value of a particular coordinate (see the sketch after this list).
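The slides contain no code; the following is a minimal sketch of this idea using scikit-learn's SVC (the library choice and the tiny 2-feature dataset are my own assumptions for illustration).

```python
# Minimal sketch: an SVM classifier on points in n-dimensional space (here n = 2).
# The data values below are invented purely for illustration.
from sklearn.svm import SVC

# Each row is a data item viewed as a point whose coordinates are its feature values.
X = [[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],
     [6.0, 5.0], [7.0, 8.0], [8.0, 6.0]]
y = [0, 0, 0, 1, 1, 1]              # class labels

clf = SVC(kernel="linear")          # linear SVM used as a classifier
clf.fit(X, y)

print(clf.predict([[2.5, 2.5], [7.5, 7.0]]))   # expected: [0 1]
```
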
PROPERTIES OF SVM
• Flexibility in choosing a similarity (kernel) function
• Sparseness of the solution when dealing with large data sets
- only the support vectors are used to specify the separating hyperplane
• Ability to handle large feature spaces
- complexity does not depend on the dimensionality of the feature space
• Overfitting can be controlled by the soft-margin approach (see the sketch after this list)
• Nice mathematical property: a simple convex optimization problem that is guaranteed to converge to a single global solution
• Feature selection
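A short sketch of how these properties surface in practice, again assuming scikit-learn (the kernel names, the C value, and the synthetic dataset are illustrative assumptions, not from the slides):

```python
# Illustrative only: kernel choice (similarity function), soft-margin control via C,
# and sparseness (number of support vectors) with scikit-learn's SVC.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# A smaller C tolerates more margin violations (a softer margin), which is one
# way to keep the model from overfitting.
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel, C=0.5).fit(X, y)
    # Only the support vectors are needed to specify the separating hyperplane.
    print(kernel, "support vectors used:", clf.n_support_.sum())
```
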
Support Vector Machines
• The line that maximizes the minimum margin is a good bet.
• The model class of "hyper-planes with a margin of m" has a low VC dimension if m is big.
• This maximum-margin separator is determined by a subset of the datapoints.
• Datapoints in this subset are called "support vectors".
• It will be useful computationally if only a small fraction of the datapoints are support vectors, because we use the support vectors to decide which side of the separator a test case is on (see the sketch after this list).
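As a rough numerical illustration (my own sketch, assuming scikit-learn), the fitted classifier exposes exactly this subset of datapoints, and the sign of its decision function places a test case on one side of the separator:

```python
# Sketch: only a small subset of datapoints become support vectors, and they are
# all that is needed to classify a test case. The data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) - 2, rng.randn(50, 2) + 2])   # two well-separated blobs
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)

# Typically only a small fraction of the datapoints end up as support vectors.
print("support vectors:", len(clf.support_vectors_), "of", len(X), "datapoints")

# The sign of the decision function says which side of the separator a test
# case is on; it is computed from the support vectors alone.
print(np.sign(clf.decision_function([[0.0, 3.0], [-3.0, -1.0]])))
```
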
Why do SVMs Generalize?
• Even though they map to a very high-dimensional space
– They have a very strong bias in that space
– The solution has to be a linear combination of the training instances (see the sketch after this list)
• A large body of theory on Structural Risk Minimization provides bounds on the error of an SVM
– Typically these error bounds are too loose to be of practical use
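The claim that the solution is a linear combination of the training instances can be checked numerically. The sketch below (my own, assuming scikit-learn's dual_coef_, support_ and intercept_ attributes) reconstructs the decision function as f(x) = Σ_i α_i y_i K(x_i, x) + b over the support vectors:

```python
# Sketch: the fitted decision function is a kernel-weighted linear combination of
# training instances (the dual form). Data and parameter values are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(1)
X = rng.randn(80, 3)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

x_test = rng.randn(5, 3)
K = rbf_kernel(x_test, X[clf.support_], gamma=0.5)        # similarity to each support vector
manual = K @ clf.dual_coef_.ravel() + clf.intercept_      # sum_i (alpha_i * y_i) K(x_i, x) + b

print(np.allclose(manual, clf.decision_function(x_test)))  # expected: True
```
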
CONCLUSIONS

• SVMs formulate learning as a mathematical program, taking advantage of the rich theory in optimization.

• SVMs use kernels to map indirectly to extremely high-dimensional spaces (see the sketch below).

• SVMs are extremely successful, robust, efficient, and versatile, and have a good theoretical basis.

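To make the "indirect mapping" point concrete, here is a small sketch (my own illustration, not from the slides) showing that a degree-2 polynomial kernel returns the same value as an explicit inner product in the expanded feature space, without ever constructing that space:

```python
# Sketch of the kernel idea: K(x, z) = (x . z)^2 equals the inner product
# <phi(x), phi(z)> for the explicit map phi(v) = (v1^2, v2^2, sqrt(2) v1 v2).
import numpy as np

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

def phi(v):
    # explicit degree-2 polynomial feature map into 3-dimensional space
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

explicit = phi(x) @ phi(z)      # inner product in the mapped (higher-dimensional) space
kernel = (x @ z) ** 2           # polynomial kernel evaluated in the original space

print(np.isclose(explicit, kernel))   # expected: True
```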