SVM-1: Summary Notes


SVM-Introduction

- Support Vector Machine (SVM) is a powerful supervised machine learning algorithm.
- It works by finding the optimal hyperplane that best separates data points into different classes while maximizing the margin between the classes, making it highly effective in high-dimensional spaces.

What is the key idea behind SVM?


- The best hyperplane π: wᵀx + b = 0 classifying between two classes is the one that has the maximum gap/margin (d) between itself and the closest +ve and -ve data points.

Margin/gap
- The hyperplane parallel to π that touches the closest +ve point: π⁺: wᵀx + b = 1
- The hyperplane parallel to π that touches the closest -ve point: π⁻: wᵀx + b = −1
- The margin is measured as the distance between them: d(π⁺, π⁻) = 2/||w||, where w is the weight vector of the model.
The optimization problem in SVM
- Optimization problem: max 2/||w||, subject to yᵢ(wᵀxᵢ + b) ≥ 1 for every training point xᵢ (equivalently, min ||w||²/2 under the same constraints).
- The goal is to maximize generalization: a wider margin leaves more room for unseen points to fall on the correct side. A minimal fitted example follows below.
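A minimal sketch of these quantities in code, assuming scikit-learn and a hypothetical toy dataset (neither appears in the original notes): fit a linear SVM with a large C to approximate the hard-margin problem, then read off w, b, and the margin 2/||w||.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: two linearly separable 2-D clusters.
X = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0],   # +ve class
              [5.0, 5.0], [6.0, 5.0], [5.0, 6.0]])  # -ve class
y = np.array([1, 1, 1, -1, -1, -1])

# A very large C approximates the hard-margin objective.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                # weight vector w of pi: w^T x + b = 0
b = clf.intercept_[0]           # bias b
margin = 2 / np.linalg.norm(w)  # d(pi+, pi-) = 2 / ||w||
print(f"w = {w}, b = {b:.3f}, margin = {margin:.3f}")
```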

Support Vectors
These are the data points that:
- lie within the margin,
- or are misclassified,
- or lie exactly on the hyperplanes π⁺ and π⁻.
In the dual formulation, αᵢ = 0 for non-support vectors, whereas αᵢ > 0 for support vectors (see the sketch below).
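As a quick sketch of how this looks in practice (assuming the scikit-learn clf fitted in the example above), the library keeps only the points with αᵢ > 0 and exposes their dual coefficients:

```python
# Support vectors are the only points the fitted model retains.
print(clf.support_)          # indices of the support vectors in X
print(clf.support_vectors_)  # the support vectors themselves
print(clf.dual_coef_)        # y_i * alpha_i for each support vector
```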

Different Types of SVM Models


A. Hard Margin SVM:
- The simplest form of the SVM model.
- It assumes no data point can lie inside the margin.
- It rarely works on real-life problems, since real data is seldom perfectly separable.

B. Soft Margin SVM:
- Introduces a slack variable ζᵢ as the error for an incorrectly placed data point.
- ζᵢ = 0: the data point is placed such that the hyperplane classifies it correctly and it lies outside the margin.
- ζᵢ > 1: the data point is placed such that the hyperplane classifies it incorrectly.
- 0 < ζᵢ < 1: the hyperplane still manages to classify the point correctly, but it lies inside the margin.
- Linear soft margin SVM is similar to logistic regression (LogReg): both fit a linear decision boundary, but SVM minimizes the hinge loss while LogReg minimizes the log loss. The sketch below shows the effect of the soft-margin penalty C.
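A minimal sketch of the soft margin in code, assuming scikit-learn and hypothetical overlapping data: the hyperparameter C weights the total slack Σζᵢ in the objective, so a small C gives a wider, softer margin while a large C penalizes violations heavily, approaching hard-margin behaviour.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical overlapping classes, so some slack is unavoidable.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.5, (50, 2)),
               rng.normal(2, 1.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2 / np.linalg.norm(clf.coef_[0])  # d = 2 / ||w||
    print(f"C={C:>6}: margin={margin:.3f}, "
          f"#support vectors={len(clf.support_)}")
```

Note how shrinking C widens the margin but admits more support vectors, i.e. more points on or inside the margin.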
