Separating Hyperplanes in SVM
Support Vector Machine (SVM) is a supervised machine learning algorithm used for both classification and regression. The idea behind it is simple: find a plane or boundary that separates the data of the two classes.
Support Vectors:
Support vectors are the data points that lie closest to the decision boundary. They are the most difficult points to classify, and they alone determine the optimal decision surface. The optimal hyperplane comes from the function class with the lowest capacity, i.e. the minimum number of independent features/parameters.
Separating Hyperplanes:
Consider a scatter plot of data points belonging to two categories in the plane.
Can we find a line that separates the two categories? Such a line is called a separating hyperplane. Why is it called a hyperplane? Because the separating boundary is a point in 1 dimension, a line in 2 dimensions, a plane in 3 dimensions, and a hyperplane in more than 3 dimensions.
Now that we understand the hyperplane, we also need to find the most optimal one. The idea is that this hyperplane should be as far as possible from the support vectors. The distance between the separating hyperplane and the support vectors is known as the margin. Thus, the best hyperplane is the one whose margin is maximum.
Generally, the margin can be taken as 2*p, where p is the distance between the separating hyperplane and the nearest support vector. Below is the method to derive the maximum-margin hyperplane for linearly separable data.
A separating hyperplane can be defined by two terms: an intercept term b and a normal vector w perpendicular to the hyperplane, commonly referred to as the weight vector in machine learning. The value of b selects which of the hyperplanes perpendicular to w is used. Every point x lying on the hyperplane must satisfy the following equation:
w^{T} \cdot x = -b
Now, consider the training set \mathbb{D} = \left \{ \left ( \vec{x_i}, y_i \right ) \right \}, where \vec{x_i} and y_i represent an n-dimensional data point and its class label respectively. For a 2-class problem, the class label can only be either -1 or +1. The linear classifier is then:
f(\vec{x}) = sign(\vec{w^{T}}\vec{x} + b)
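As a minimal sketch of this decision rule (the weight vector and bias below are illustrative numbers chosen to match the worked example later in this article, not learned parameters):
Python3
import numpy as np

def linear_classifier(x, w, b):
    # Return +1 or -1 according to sign(w^T x + b)
    return np.sign(np.dot(w, x) + b)

# Illustrative parameters, matching the worked example below
w = np.array([1.0, 2.0])
b = -5.5
print(linear_classifier(np.array([2, 3]), w, b))   # 1.0
print(linear_classifier(np.array([1, 1]), w, b))   # -1.0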
However, the functional margin y(\vec{w^{T}}\vec{x} + b) defined by the classifier above is unconstrained: it can be made arbitrarily large by rescaling \vec{w} and b. So we need to formalize the geometric distance between a data point x and the decision boundary. The shortest such distance is, of course, the perpendicular distance, i.e. along the direction of the normal vector \vec{w}. A unit vector in the direction of this normal vector is given by \frac {\vec{w}}{\left \| \vec{w} \right \|}. Now,
\vec{x^{'}}, the projection of \vec{x} onto the decision boundary, can be defined as:
\vec{x^{'}} = \vec{x} - yr\frac{\vec{w}}{\left \| \vec{w} \right \|}
Since \vec{x^{'}} lies on the decision boundary, substituting it for x in the hyperplane equation \vec{w^{T}}\vec{x} + b = 0 gives:
\vec{w^{T}}\left ( \vec{x} - yr\frac{\vec{w}}{\left \| \vec{w} \right \|}\right ) + b = 0
Now, solving for r gives following equation:
r = y \frac{\vec{w^{T}}\vec{x} + b}{\left \| \vec{w} \right \|}
where r is the geometric margin of the point, i.e. its perpendicular distance from the hyperplane. The scale of \vec{w} and b is still free, so we fix it by requiring the functional margin of the points closest to the boundary (the support vectors) to be exactly 1. With this normalization, every item in the data set satisfies:
y_i(\vec{w^{T}}\vec{x_i} +b) \geq 1
and the geometric margin of each data point is:
r_i = y_i \frac{\vec{w^{T}}\vec{x_i} + b}{\left \| \vec{w} \right \|}
Since the support vectors satisfy the constraint with equality, the geometric margin of the classifier (the full width of the band between the two classes) is:
\rho = \frac{2}{||\vec{w}||}
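As a small numerical sketch of these quantities (the values of w, b and the data points are taken from the worked example later in this article):
Python3
import numpy as np

# Parameters and data matching the worked example below
w = np.array([0.4, 0.8])
b = -2.2
X = np.array([[1, 1], [2, 0], [2, 3]])
y = np.array([-1, -1, 1])

# Per-point geometric margins r_i = y_i * (w . x_i + b) / ||w||
r = y * (X @ w + b) / np.linalg.norm(w)
print(r)                        # every distance is >= sqrt(5)/2 ~ 1.118
print(2 / np.linalg.norm(w))    # geometric margin rho ~ 2.236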
We need to maximize the geometric margin such that:
\rho =\frac{2}{\left \| \vec{w} \right \|} \; \forall \left ( \vec{x_i}, y_i \right) \in \mathbb{D}; \; y_i(\vec{w^{T}}\vec{x_i} +b) \geq 1
Maximizing \rho is the same as minimizing \frac{1}{\rho} = \frac{||\vec{w}||}{2}; that is, we need to find w and b such that:
\frac{1}{2}\vec{w^{T}}\vec{w} is minimum \forall \left ( \vec{x_i}, y_i \right) \in \mathbb{D}; y_i(\vec{w^{T}}\vec{x_i} +b) \geq 1
Here, we are minimizing a quadratic objective subject to linear constraints. This leads us to the dual formulation of the problem.
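Before moving to the dual, the primal problem above can also be solved directly with an off-the-shelf convex optimizer. Below is a rough sketch using the cvxpy library (an assumption on my part, not part of the original implementation), applied to the three-point data set used in the example later in this article:
Python3
import numpy as np
import cvxpy as cp

# Three-point toy data set (the same one used in the example below)
X = np.array([[1., 1.], [2., 0.], [2., 3.]])
y = np.array([-1., -1., 1.])

w = cp.Variable(2)
b = cp.Variable()

# minimize (1/2) w^T w  subject to  y_i (w^T x_i + b) >= 1
objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + b) >= 1]
cp.Problem(objective, constraints).solve()

print(w.value, b.value)   # expected to be close to [0.4, 0.8] and -2.2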
Duality Problem:
In optimization, the duality principle states that an optimization problem can be viewed from either of two perspectives: the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem.
An optimization problem can typically be written as:
minimize_{x} \, \, f(x) \\ subject \, to \,\, g_i (x) = 0, \,\,\, i = 1, \ldots, p \\ h_i(x) \leq 0, \,\, i = 1, \ldots, m
where f is the objective function and g and h are the constraint functions. The above problem can be solved by a technique such as the method of Lagrange multipliers.
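As an aside, a generic problem of this form can also be handed to a numerical solver. The sketch below uses scipy.optimize.minimize with the SLSQP method (an assumption, not something used in the original article) on a toy objective and constraints chosen purely for illustration:
Python3
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x0^2 + x1^2
# subject to   g(x) = x0 + x1 - 1 = 0   and   h(x) = x0 - 0.7 <= 0
f = lambda x: x[0]**2 + x[1]**2
constraints = [
    {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 1},  # g(x) = 0
    {'type': 'ineq', 'fun': lambda x: 0.7 - x[0]},       # SciPy expects fun(x) >= 0, so pass -h(x)
]
result = minimize(f, x0=np.array([0.0, 0.0]), method='SLSQP', constraints=constraints)
print(result.x)   # expected to be close to [0.5, 0.5]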
Lagrange multipliers
The method of Lagrange multipliers is a way of finding the local minima and maxima of a function subject to an equality constraint. For a constraint g(x, y) = 0, a constrained extremum of f must satisfy:
\nabla f (x,y) = \lambda \nabla g(x,y) \\ or \\ \nabla f (x,y) - \lambda \nabla g(x,y) = 0
Suppose we define the function
L(x,y, \lambda) = f (x,y) - \lambda \, g(x,y)
This function is known as the Lagrangian. We then need to find the points where \nabla L(x,y, \lambda) = 0, i.e. the points where the gradients of f and g are parallel and the constraint g(x, y) = 0 holds.
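Here is a small worked sketch of this procedure using sympy (the library and the toy objective are my own choices for illustration): minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
Python3
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2          # objective
g = x + y - 1            # equality constraint g(x, y) = 0

L = f - lam * g          # the Lagrangian
# Set all partial derivatives of L to zero and solve
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(stationary)        # x = 1/2, y = 1/2, lam = 1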
Example
Consider a training set of three points, with (1,1) and (2,0) belonging to one class and (2,3) belonging to the other. Geometrically, the maximum-margin weight vector will be parallel to the shortest line connecting the points of the two classes, i.e. the line from (1,1) to (2,3), giving a weight vector proportional to (1,2). The optimal decision surface (separating hyperplane) is orthogonal to that line and intersects it at its midpoint (1.5, 2). Now, we can calculate the bias using this conclusion:
y = x_1 + 2x_2 +b \\ \\ y=0 \\ x_1 =1.5 \\ x_2 =2 \\ \\ 0 = 1.5+ 4 +b \\ \\ b=- 5.5
Now, the decision surface equation becomes:
y = x_1 + 2x_2 -5.5
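A quick check (a sketch, not part of the original implementation) confirms that this surface assigns the expected signs to the three training points:
Python3
import numpy as np

X = np.array([[1, 1], [2, 0], [2, 3]])
y = np.array([-1, -1, 1])

scores = X[:, 0] + 2 * X[:, 1] - 5.5   # x1 + 2*x2 - 5.5
print(scores)            # [-2.5 -3.5  2.5]
print(np.sign(scores))   # [-1. -1.  1.] -> matches the labels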
Now, since y_i(\vec{w^{T}}\vec{x_i} +b) \geq 1 for every point, minimizing \left \| \vec{w} \right \| forces the constraint to hold with equality at the support vectors. The weight vector is parallel to (1,2), so let w = (a, 2a) for some a such that:
a + 2a + b = -1 \, for \, point \,(1,1) \\ 2a + 6a + b = 1 \, for \, point \,(2,3) \\
Solving the above equations gives:
a =\frac{2}{5}; \, b= \frac{-11}{5}
This means the margin becomes:
\rho = \frac{2}{||\vec{w}||} = \frac{2}{\sqrt{\frac{4}{25}+ \frac{16}{25}}} = \frac{2}{\frac{2\sqrt{5}}{5}} = \sqrt{5} \approx 2.236
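These values can also be verified numerically by solving the two equality constraints as a 2x2 linear system (a small NumPy sketch):
Python3
import numpy as np

# Constraints:  3a + b = -1  (point (1,1)),  8a + b = 1  (point (2,3))
A = np.array([[3., 1.], [8., 1.]])
rhs = np.array([-1., 1.])
a, b = np.linalg.solve(A, rhs)
print(a, b)                      # 0.4, -2.2  ->  w = (0.4, 0.8)

w = np.array([a, 2 * a])
print(2 / np.linalg.norm(w))     # ~2.236, i.e. sqrt(5)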
Implementation
In this implementation, we verify the above example using the sklearn library by modelling the same data set:
Python3
# Import necessary libraries/functions
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC

# Define the dataset
X = np.array([[1, 1],
              [2, 0],
              [2, 3]])
Y = np.array([0, 0, 1])

# Define a support vector classifier with a linear kernel
clf = SVC(gamma='auto', kernel='linear')

# Fit the above data in SVC
clf.fit(X, Y)

# Plot the decision boundary, the margins and the data points
w = clf.coef_[0]
b = clf.intercept_[0]
a = -w[0] / w[1]
xx = np.linspace(0, 12)
yy = a * xx - b / w[1]            # decision boundary: w.x + b = 0
y_neg = a * xx - (b + 1) / w[1]   # negative margin:   w.x + b = -1
y_pos = a * xx - (b - 1) / w[1]   # positive margin:   w.x + b = +1

plt.figure(1, figsize=(15, 10))
plt.plot(xx, yy, 'k',
         label=f"Decision Boundary (0 = {w[0]}x1 + {w[1]}x2 + {b})")
plt.plot(xx, y_neg, 'b-.',
         label=f"Neg Margin (-1 = {w[0]}x1 + {w[1]}x2 + {b})")
plt.plot(xx, y_pos, 'r-.',
         label=f"Pos Margin (1 = {w[0]}x1 + {w[1]}x2 + {b})")

# Scatter the data points, labelling each class once for the legend
for i in range(3):
    if Y[i] == 0:
        plt.scatter(X[i][0], X[i][1], color='red', marker='o',
                    label='negative' if i == 0 else None)
    else:
        plt.scatter(X[i][0], X[i][1], color='green', marker='x',
                    label='positive' if i == 2 else None)

plt.legend()
plt.show()

# Calculate the margin 2 / ||w||
print(f'Margin : {2.0 / np.sqrt(np.sum(clf.coef_ ** 2))}')
Output:
Margin : 2.236
Final SVM decision boundary
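As a final check (assuming the clf object from the snippet above is still in scope), the coefficients learned by sklearn can be compared with the values derived by hand:
Python3
print(clf.coef_[0], clf.intercept_[0])   # expected to be close to [0.4 0.8] and -2.2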