Introduction to Support Vector Machines (SVM)

Support Vector Machines (SVMs) are supervised learning models used for classification and regression, developed in the 1990s. They work by maximizing the margin between classes using hyperplanes and support vectors, with applications in image classification, text categorization, and bioinformatics. While SVMs are effective in high dimensions and memory efficient, they can overfit and are not suitable for large datasets.

Uploaded by Pon Saravanan

Introduction to Support Vector Machines (SVM)
Support Vector Machines (SVMs) are supervised learning models that perform classification and regression tasks. They were developed at Bell Labs in the 1990s by Vapnik and colleagues. The core idea is to find the optimal separating hyperplane by maximizing the margin between classes. SVMs are widely used in image classification, text categorization, and bioinformatics.
Hyperplanes and Margins

Hyperplane: A hyperplane is a decision boundary that separates data points of different classes.

Margin: The margin is the distance between the hyperplane and the closest data points, which are called support vectors.

The goal is to maximize the margin. This improves generalization and robustness.
Support Vectors: The Key to SVM

Definition: Support vectors are the data points that lie closest to the hyperplane.

Importance: The decision boundary is defined by the support vectors alone; the other points do not affect it.

Robustness: SVM is insensitive to data points that lie far from the margin.
Linear SVM: Math Behind the Margin

Objective Function: Maximize the margin width 2 / ||w||, where w is the weight vector (equivalently, minimize ||w||^2 / 2).

Constraints: Every training point must be correctly classified: y_i(w^T x_i + b) >= 1 for all i.

Optimization: Solve the resulting quadratic program to find the optimal w and b.
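The objective and constraints above can be sketched in code. This is a minimal example assuming scikit-learn (the slides do not name a library); `SVC` with a linear kernel solves the quadratic program internally and exposes w, b, and the support vectors.

```python
# Hard-margin linear SVM sketch using scikit-learn (an assumption;
# the slides do not specify an implementation).
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable classes in 2-D.
X = np.array([[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

# A large C approximates the hard-margin formulation above.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w = clf.coef_[0]                 # weight vector w
b = clf.intercept_[0]            # bias b
margin = 2 / np.linalg.norm(w)   # margin width 2 / ||w||

print("support vectors:", clf.support_vectors_)
print("margin width:", margin)
```

With labels mapped to -1/+1, every training point satisfies the constraint y_i(w^T x_i + b) >= 1, and only the points nearest the boundary appear in `clf.support_vectors_`.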
Non-Linear SVM: The Kernel Trick

Problem: A linear SVM fails on data that is not linearly separable.

Solution: Map the data into a higher-dimensional space, using a kernel function, where a linear separator may exist.

Kernel Functions: Polynomial, RBF, and sigmoid are common options.
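A small experiment illustrates the kernel trick. Assuming scikit-learn (not named in the slides), `make_circles` generates two concentric rings that no straight line can separate; an RBF kernel separates them without ever computing the high-dimensional mapping explicitly.

```python
# Linear vs. RBF kernel on non-linearly-separable data
# (scikit-learn assumed for illustration).
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = SVC(kernel="linear").fit(X, y)
rbf_clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

print("linear accuracy:", linear_clf.score(X, y))
print("RBF accuracy:", rbf_clf.score(X, y))
```

The linear model performs near chance on this data, while the RBF model fits it almost perfectly, which is the failure/solution pair described above.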
SVM in Practice: Applications

1 Image Classification
Identify objects in images. Example: the MNIST digit dataset.

2 Text Categorization
Classify documents. Example: sentiment analysis.

3 Bioinformatics
Predict protein function from gene expression data.
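The text-categorization application can be sketched end to end. The pipeline, the tiny hand-made corpus, and the use of scikit-learn's TF-IDF vectorizer with a linear SVM are all illustrative assumptions, not part of the original slides.

```python
# Toy sentiment-analysis sketch: TF-IDF features + linear SVM
# (scikit-learn and the corpus are assumptions for illustration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "great movie, loved it", "fantastic acting and plot",
    "wonderful experience", "terrible film, waste of time",
    "boring and awful", "worst movie ever",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# Vectorize text, then classify with a linear SVM.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

print(model.predict(["loved the plot", "awful waste of time"]))
```

Linear kernels are a common default for text because TF-IDF feature spaces are already high-dimensional and sparse.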
Advantages and Disadvantages of SVM

Advantages
Effective in high-dimensional spaces.
Memory efficient, since only the support vectors are stored.
Versatile through the choice of kernel function.

Disadvantages
Can overfit if C and the kernel parameters are poorly chosen.
Training scales poorly to very large datasets.
Hyperparameter tuning is challenging.
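The tuning difficulty noted above is usually addressed with cross-validated grid search over C and gamma. A minimal sketch, assuming scikit-learn and the bundled iris dataset (neither is named in the slides):

```python
# Cross-validated grid search over SVM hyperparameters
# (scikit-learn and the iris dataset assumed for illustration).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for the regularization and RBF-width parameters.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```

Because overfitting is possible, the score reported here is a cross-validated estimate rather than training accuracy.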
Conclusion

1 Powerful algorithm

2 Key concepts

3 Wide applications

SVM is a versatile algorithm for classification and regression, built on hyperplanes, margins, support vectors, and the kernel trick. In practice, pay attention to parameter tuning and data preprocessing.
