Support Vector Machine

Support Vector Machine (SVM) is a supervised learning algorithm used for classification and regression, aiming to create an optimal hyperplane that separates data into classes. It identifies support vectors, which are the closest points to the hyperplane, and maximizes the margin between these vectors and the hyperplane. SVM can be categorized into Linear SVM for linearly separable data and Non-Linear SVM for data that requires additional dimensions for classification.

INTRODUCTION

• Support Vector Machine (SVM) is one of the most popular Supervised Learning algorithms, used for both Classification and Regression problems.

• The goal of the SVM algorithm is to create the best line or decision boundary that can segregate n-dimensional space into classes, so that new data points can easily be placed in the correct category in the future. This best decision boundary is called a hyperplane.
Diagram: two different categories classified using a decision boundary, or hyperplane. For example, given images of cats and dogs, SVM creates a decision boundary between the two classes and chooses the extreme cases from each class (the support vectors). On the basis of these support vectors, a new image is classified as a cat or a dog.
• The SVM algorithm finds the best line or decision boundary; this best boundary is called a hyperplane. The algorithm then finds the points from both classes that lie closest to this boundary; these points are called support vectors. The distance between the support vectors and the hyperplane is called the margin, and the goal of SVM is to maximize this margin. The hyperplane with the maximum margin is called the optimal hyperplane.
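The distance and margin described above can be written down directly: for a hyperplane w·x + b = 0, the distance from a point x to the plane is |w·x + b| / ||w||, and the width of the margin between the two support-vector lines w·x + b = ±1 is 2 / ||w||. A minimal sketch (the hyperplane coefficients and the test point are illustrative assumptions, not from the slides):

```python
import math

def distance_to_hyperplane(w, b, x):
    """Distance from point x to the hyperplane w.x + b = 0: |w.x + b| / ||w||."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(dot + b) / norm

def margin(w):
    """Width of the band between w.x + b = +1 and w.x + b = -1: 2 / ||w||."""
    return 2 / math.sqrt(sum(wi * wi for wi in w))

# Example hyperplane 3*x1 + 4*x2 - 5 = 0, so ||w|| = 5.
w, b = [3.0, 4.0], -5.0
print(distance_to_hyperplane(w, b, [0.0, 0.0]))  # → 1.0  (|0 + 0 - 5| / 5)
print(margin(w))                                  # → 0.4  (2 / 5)
```

Maximizing the margin 2 / ||w|| is why SVM training is usually phrased as minimizing ||w|| subject to every point being classified with margin at least 1.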
Types of SVM

• Linear SVM: Linear SVM is used for linearly separable data. If a dataset can be classified into two classes using a single straight line, the data is termed linearly separable, and the classifier used is called a Linear SVM classifier.

• Non-linear SVM: Non-Linear SVM is used for non-linearly separable data. If a dataset cannot be classified using a straight line, the data is termed non-linear, and the classifier used is called a Non-linear SVM classifier.
Linear SVM
• Example: We have a dataset with two tags (green and blue), and the dataset has two features, x1 and x2. We want a classifier that can classify a pair (x1, x2) of coordinates as either green or blue. Since the data lies in 2-D space, a single straight line can separate the two classes.
Non-Linear SVM
• For non-linear data, we cannot draw a single straight line. So to separate these data points, we need to add one more dimension. For linear data we used the two dimensions x and y, so for non-linear data we add a third dimension z.
• It can be calculated as:
z = x² + y²
Non-Linear SVM (Contd..)
• Now, SVM will divide the dataset into classes in the following way. Consider the image:
Non-Linear SVM (Contd..)
• Since we are now in 3-D space, the decision boundary looks like a flat plane parallel to the x-y plane. If we convert it back to 2-D space by setting z = 1, we get a circle of radius 1 separating the non-linear data.
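The mapping z = x² + y² can be checked directly: points inside the unit circle get z < 1 and points outside get z > 1, so the flat plane z = 1 in 3-D corresponds exactly to the circle x² + y² = 1 back in 2-D. A small sketch (the sample points are illustrative assumptions):

```python
# Lift 2-D points into 3-D using the text's transform z = x^2 + y^2,
# then separate them with the flat plane z = 1 (the unit circle in 2-D).

def lift(x, y):
    """Map a 2-D point (x, y) to 3-D as (x, y, x^2 + y^2)."""
    return (x, y, x**2 + y**2)

def classify(x, y, threshold=1.0):
    """+1 if the lifted point lies above the plane z = threshold, else -1."""
    _, _, z = lift(x, y)
    return +1 if z > threshold else -1

inner = [(0.0, 0.0), (0.5, 0.5), (-0.3, 0.4)]   # inside the unit circle
outer = [(2.0, 0.0), (-1.5, 1.5), (0.0, -3.0)]  # outside the unit circle

print([classify(x, y) for x, y in inner])  # → [-1, -1, -1]
print([classify(x, y) for x, y in outer])  # → [1, 1, 1]
```

This explicit extra dimension is the simplest illustration of the idea behind kernels: after the lift, a plain linear boundary in the higher-dimensional space separates data that no straight line could separate in the original space.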