
SVM

DEFINITION

SVM (Support Vector Machine) is a supervised learning algorithm that can be used for classification or regression problems.

The algorithm finds an optimal hyperplane that linearly separates the data into two different classes.

SVM is well suited to classification problems with large feature spaces, where the objects to be classified are represented by a large set of features.

In a classification problem, the data are divided into two classes: a positive label class and a negative label class.

1. Positive Class:

Data points in this class are those identified by the SVM algorithm as belonging to a specific group or set in the problem.

Example: In an email classification problem, the positive class could represent spam emails.

2. Negative Class:

Data points in this class are the opposite of the positive class, meaning they belong to the opposing group.

Example: In the same email classification problem, the negative class could represent non-spam (regular) emails.
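The positive/negative setup above can be sketched with scikit-learn's SVC. The feature values below are made up for illustration (a hypothetical two-feature representation of emails); the API calls are standard scikit-learn:

```python
# Minimal binary classification sketch with scikit-learn's SVC.
from sklearn.svm import SVC

# Each row is a hypothetical email described by two features,
# e.g. [number of exclamation marks, number of suspicious links].
X = [[8, 5], [7, 6], [9, 7], [1, 0], [0, 1], [2, 0]]
# Labels: 1 = positive class (spam), 0 = negative class (non-spam).
y = [1, 1, 1, 0, 0, 0]

clf = SVC(kernel="linear")
clf.fit(X, y)

# Classify two new emails: one spam-like, one ham-like.
print(clf.predict([[8, 6], [1, 1]]))
```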
PRINCIPLE OF SVM

(Image illustrating how SVM works in a two-class classification problem with two features.)

Explanation of the basic principle of SVM using the image:

Data groups:

- Yellow triangles: Represent data belonging to one class (possibly the negative class).

- Blue diamonds: Represent data belonging to the other class (possibly the positive class).

Hyperplanes:

- Positive Hyperplane: The line parallel to the optimal hyperplane and closest to the points of the positive class. It bounds the distance from the positive points to the optimal hyperplane.

- Negative Hyperplane: The line parallel to the optimal hyperplane and closest to the points of the negative class. It bounds the distance from the negative points to the optimal hyperplane.

Optimal Hyperplane:

This is the central line that divides the data into two groups such that the distance between this hyperplane and the nearest data points of both groups is maximized. The goal of the SVM algorithm is to find the hyperplane with the maximum margin between the two data classes.

Maximum Margin:

The maximum margin is the distance between the positive and negative hyperplanes. The larger this distance, the better the SVM model tends to generalize when classifying new data.
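For a linear SVM the margin width works out to 2 / ||w||, where w is the learned weight vector. A small sketch, using illustrative data chosen to be symmetric so the result is easy to check:

```python
# Sketch: computing the margin of a linear SVM as 2 / ||w||.
import numpy as np
from sklearn.svm import SVC

# Two symmetric clusters; the max-margin separator is the line x + y = 0.
X = [[2, 2], [3, 3], [-2, -2], [-3, -3]]
y = [1, 1, 0, 0]

clf = SVC(kernel="linear", C=1000)  # large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]
margin = 2 / np.linalg.norm(w)  # distance between the two parallel hyperplanes
print(f"margin width: {margin:.2f}")
```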

Support Vectors:

Support vectors are the data points closest to the optimal hyperplane; these points determine its position. In the image, the points lying on the positive and negative hyperplanes are the support vectors. SVM uses these points to define and adjust the optimal hyperplane.
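A fitted SVC exposes its support vectors directly, which makes the idea concrete: only the points nearest the boundary matter. The data below is illustrative:

```python
# Sketch: inspecting the support vectors of a fitted linear SVC.
from sklearn.svm import SVC

X = [[1, 1], [2, 1], [3, 3], [6, 6], [7, 6], [8, 8]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear")
clf.fit(X, y)

# Only the points nearest the hyperplane become support vectors;
# they alone determine its position.
print(clf.support_vectors_)
print(clf.n_support_)  # number of support vectors per class
```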

KERNEL FUNCTION

In cases where the data cannot be separated by a hyperplane in the original space, kernel functions are used to transform the data into a higher-dimensional space where a separating hyperplane may exist.

There are 4 common types of kernels used in SVM:


Linear Kernel: Suitable for linear data.

Polynomial Kernel: Deals with nonlinear data with polynomial properties.

Gaussian (RBF) Kernel: Suitable for complex nonlinear data.

Sigmoid Kernel: Commonly used in neural networks and nonlinear classification.
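The difference between these kernels shows up clearly on XOR-style data, which no linear boundary can separate. A sketch comparing the four kernels listed above (training accuracy only, on an intentionally tiny dataset):

```python
# Sketch: the same nonlinear (XOR-like) data under different kernels.
from sklearn.svm import SVC

# XOR pattern: opposite corners share a class -- not linearly separable.
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 0, 1, 1]

scores = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel, gamma="scale")
    clf.fit(X, y)
    scores[kernel] = clf.score(X, y)  # training accuracy
    print(kernel, scores[kernel])
```

The linear kernel cannot reach 100% training accuracy here, while the RBF kernel, which implicitly maps the points into a higher-dimensional space, can.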

ADVANTAGES AND DISADVANTAGES

Advantages of SVM:

Efficient in high-dimensional spaces: Particularly useful for problems with many features, such as text classification and sentiment analysis.

Memory-efficient: The decision function depends only on the support vectors, so only those important points need to be kept for classification.

Flexible with kernels: SVM can handle both linear and nonlinear problems through the use of different kernels.

Disadvantages of SVM:

Requires numerical input: SVM works only with real-valued features, so categorical data must be encoded numerically first.

Natively binary: SVM separates two classes; multi-class problems require strategies such as one-vs-rest or one-vs-one.

Does not provide probabilities: a standard SVM outputs only a class label (or a signed distance to the hyperplane), not a probability score; probability estimates require an extra calibration step such as Platt scaling.
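The probability limitation is visible in scikit-learn: a plain SVC exposes only a decision function, and passing probability=True bolts on Platt-scaling calibration at extra training cost. A sketch with illustrative data:

```python
# Sketch: decision scores vs. calibrated probabilities in SVC.
from sklearn.svm import SVC

X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 1],
     [5, 5], [6, 5], [5, 6], [6, 6], [7, 6]]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# probability=True enables Platt scaling (fitted via internal
# cross-validation, which slows training down).
clf = SVC(kernel="linear", probability=True, random_state=0)
clf.fit(X, y)

print(clf.decision_function([[6, 6]]))  # signed distance to the hyperplane
print(clf.predict_proba([[6, 6]]))      # calibrated class probabilities
```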

CONCLUSION

SVM is a powerful classification method, particularly useful in image processing, text classification, and sentiment analysis thanks to its flexible use of kernel functions. However, training can become slow on very large datasets, and SVM does not provide classification probabilities out of the box.
