Experiment 2.3 SVM Classifier
COURSE OUTCOMES
CO4 Evaluate machine learning models’ performance and apply learning strategies to
improve the performance of supervised and unsupervised learning models.
CO5 Develop a suitable model for supervised and unsupervised learning algorithms and
optimize the model for the expected accuracy.
The goal of the SVM algorithm is to find the best line or decision boundary that segregates
n-dimensional space into classes, so that a new data point can be placed in the correct
category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme
cases are called support vectors, and hence the algorithm is termed a Support Vector Machine.
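In standard notation (the general textbook formulation, not specific to the program below), a linear SVM searches for a hyperplane w · x + b = 0 and maximises the margin 2 / ||w|| between the two classes, which is equivalent to solving:

    minimize    (1/2) ||w||^2
    subject to  y_i (w · x_i + b) >= 1   for every training sample (x_i, y_i), with y_i in {-1, +1}

The training points that satisfy this constraint with equality lie exactly on the margin; these are the support vectors.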
Consider the diagram below, in which two different categories are classified using a
decision boundary or hyperplane:
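The same geometric idea can be reproduced in code. The sketch below is illustrative only: the data points are made up, and only standard sklearn.svm.SVC attributes (coef_, intercept_, support_vectors_) are used to inspect the learned hyperplane and support vectors.

from sklearn import svm

# Two made-up, well-separated 2-D groups: class 0 (lower-left) and class 1 (upper-right)
X = [[1, 1], [1, 2], [2, 1], [2, 2],
     [5, 5], [5, 6], [6, 5], [6, 6]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = svm.SVC(kernel='linear')
clf.fit(X, y)

# Hyperplane w . x + b = 0 found by the linear SVM
print("w =", clf.coef_[0], " b =", clf.intercept_[0])
# Extreme points that define the margin
print("support vectors:\n", clf.support_vectors_)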
Program
from sklearn import svm

# Toy training data: two samples with labels 0 and 1
X = [[0, 0], [1, 1]]
y = [0, 1]

# Fit an SVM classifier (default RBF kernel)
clf = svm.SVC()
clf.fit(X, y)

# predict() expects a 2-D array: one row per sample
print(clf.predict([[2, 2]]))
print(clf.predict([[48, 23]]))
print(clf.predict([[50, 27]]))
When the height is 48 and the weight is 23, the output is 0, i.e. Cat.
When the height is 50 and the weight is 27, the output is 1, i.e. Dog.
(These outputs assume a classifier trained on labelled cat/dog height–weight data rather than the
two toy training points above; see the sketch below.)
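A minimal sketch of such a setup is shown here; the measurements and the labelling (0 = Cat, 1 = Dog) are assumptions chosen only so that the two test points fall on opposite sides of the decision boundary.

from sklearn import svm

# Hypothetical (height, weight) training samples: 0 = Cat, 1 = Dog
X = [[45, 20], [47, 22], [48, 24],   # cats
     [50, 26], [52, 28], [55, 30]]   # dogs
y = [0, 0, 0, 1, 1, 1]

clf = svm.SVC(kernel='linear')
clf.fit(X, y)

print(clf.predict([[48, 23]]))   # expected output: [0] -> Cat
print(clf.predict([[50, 27]]))   # expected output: [1] -> Dog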
Viva Questions
1. What is the geometric intuition behind SVM?
2. What do you know about Hard Margin SVM and Soft Margin SVM?
3. What is Hinge Loss?
4. Explain the dual form of the SVM formulation.
5. What's the “kernel trick” and how is it useful? (An illustrative sketch follows this list.)
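As a quick illustration for question 5, the sketch below contrasts a linear kernel with an RBF kernel on XOR-style data (the data points are an assumption chosen only to make the effect visible): a linear SVM cannot separate the classes, while the RBF kernel implicitly maps them into a higher-dimensional space where they become separable.

from sklearn import svm

# XOR-style data: not separable by any straight line in 2-D
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

linear_clf = svm.SVC(kernel='linear').fit(X, y)
rbf_clf = svm.SVC(kernel='rbf', gamma=2).fit(X, y)

print("linear:", linear_clf.predict(X))   # cannot reproduce y exactly
print("rbf:   ", rbf_clf.predict(X))      # expected to match y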