
Support Vector Machine (SVM)

A support vector machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labelled training data, the algorithm outputs an optimal hyperplane which categorizes new examples. In 2-D space this hyperplane is a line dividing the plane into two parts, with each class lying on either side of the line.
Explanation: -
Can we define a separating line for two classes of scattered data points? In the simplest case we just draw a line between the two classes.

This is called the separation of classes, and that is what an SVM really does: it finds an optimal hyperplane in multidimensional space that separates the given classes.

When the classes cannot be separated by a straight line, we apply a transformation of axes and add one more dimension, which we will call the z-axis, to separate the classes more clearly.
Let us assume the value of a point on the z-axis is z = x² + y², i.e. the squared distance of the point from the origin. If we now plot the points against the z-axis, a clear separation is visible and a separating line can be drawn.
These transformations are called kernels.
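As a minimal sketch of this idea (assuming Python with NumPy; the data is synthetic and purely illustrative), the z = x² + y² transformation turns a ring around a cluster into two linearly separable groups:

import numpy as np

# Synthetic 2-D data: an inner cluster (class 0) surrounded by a ring (class 1).
rng = np.random.default_rng(0)
inner = rng.normal(0.0, 0.5, size=(50, 2))
angles = rng.uniform(0.0, 2 * np.pi, size=50)
ring = np.column_stack([3 * np.cos(angles), 3 * np.sin(angles)])

# The transformation described above: z = x^2 + y^2,
# the squared distance of each point from the origin.
def add_z(points):
    z = points[:, 0] ** 2 + points[:, 1] ** 2
    return np.column_stack([points, z])

# In the z dimension the classes separate: the inner cluster has small z,
# the ring has z near 9, so a plane such as z = 4 splits them cleanly.
print(add_z(inner)[:, 2].max(), add_z(ring)[:, 2].min())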
There is a trade-off in SVMs, however. In real-world applications, training a good classifier on millions of samples takes a lot of time, much of it spent tuning the regularization and kernel parameters.
Tuning parameters for SVM: -
1. Kernel: -
It defines whether we want a linear separation or a non-linear one.
For a linear kernel, the prediction for a new input uses the dot product between the input x and each support vector xi, and is calculated as:
f(x) = B0 + Σi ai (x · xi)
For a polynomial kernel of degree d, the kernel function becomes:
K(x, xi) = (1 + x · xi)^d
For an exponential (RBF) kernel it becomes:
K(x, xi) = exp(−γ Σj (xj − xij)²)
The polynomial and exponential kernels compute the separating line in a higher-dimensional space; this is called the kernel trick.
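The three kernel formulas above can be written directly in Python (a sketch assuming NumPy; the degree d and gamma values are illustrative defaults, not values from the text):

import numpy as np

def linear_kernel(x, xi):
    # Dot product between the input x and a support vector xi.
    return np.dot(x, xi)

def polynomial_kernel(x, xi, d=2):
    # K(x, xi) = (1 + x . xi)^d
    return (1.0 + np.dot(x, xi)) ** d

def rbf_kernel(x, xi, gamma=0.5):
    # K(x, xi) = exp(-gamma * sum_j (x_j - xi_j)^2)
    return np.exp(-gamma * np.sum((x - xi) ** 2))

x = np.array([1.0, 2.0])
xi = np.array([2.0, 1.0])
print(linear_kernel(x, xi), polynomial_kernel(x, xi), rbf_kernel(x, xi))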
2. Regularization: -
Often called the C parameter, it tells the SVM optimization how much you want to avoid misclassifying each training sample.
For a large value of C, the optimization will choose a smaller-margin hyperplane if that hyperplane classifies more training points correctly. Conversely, a very small value of C will cause the optimizer to look for a larger-margin hyperplane, even if it misclassifies some points.
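As a sketch of how this looks in practice (assuming scikit-learn, which the text does not name), C is passed straight to the classifier:

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# Large C: misclassification is penalized heavily -> smaller-margin hyperplane.
strict_svm = SVC(kernel="linear", C=1000.0).fit(X, y)
# Small C: some misclassification is tolerated -> larger-margin hyperplane.
soft_svm = SVC(kernel="linear", C=0.01).fit(X, y)

# A wider margin typically leaves more points on or inside the margin,
# so the soft classifier usually keeps more support vectors.
print(len(strict_svm.support_), len(soft_svm.support_))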
3. Gamma: -
The gamma parameter defines how far the influence of a single training example reaches, with low values meaning "far" and high values meaning "near".
With a low gamma value, points far away from the plausible separation line are also considered when calculating it, whereas with a high gamma value only the points close to the plausible separation line are considered.
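A similar sketch for gamma with the exponential (RBF) kernel, again assuming scikit-learn:

from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Low gamma: each training example influences points far away,
# giving a smooth, almost linear decision boundary.
wide = SVC(kernel="rbf", gamma=0.01).fit(X, y)
# High gamma: influence is local, so the boundary hugs individual
# points and can overfit.
tight = SVC(kernel="rbf", gamma=100.0).fit(X, y)

# The high-gamma model usually fits the training data more closely.
print(wide.score(X, y), tight.score(X, y))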
4. Margin: -
An SVM tries to achieve a hyperplane with a good margin. The margin is the separation between the hyperplane and the closest points of each class.
A good margin is one where this separation is large for both classes, so that points stay within their respective classes without crossing over to the other class.
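For a linear SVM the margin width can be read off the fitted weights as 2 / ||w|| (a sketch, again assuming scikit-learn):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The separating hyperplane is w . x + b = 0; the distance between
# the two margin boundaries is 2 / ||w||.
w = clf.coef_[0]
print("margin width:", 2.0 / np.linalg.norm(w))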
