Lab Program (SVM From Scratch)

SVM from Scratch and SVM using sklearn

SVM from Scratch

Algorithm Explanation

1. Objective: Find a hyperplane that separates classes with the maximum margin. This
hyperplane can be represented by a weight vector w and bias b.
2. Loss Function: We use a hinge loss function with an L2 regularization term and apply gradient descent to minimize it; the objective is written out below.
3. Gradient Descent: We iteratively update the weights and bias using the gradients of the loss function.
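
Putting steps 2 and 3 together, the quantity being minimized is the regularized hinge loss. Written out (λ is lambda_param in the code, and the per-sample gradients are exactly the updates applied in the training loop below):

J(w, b) = \lambda \lVert w \rVert^{2} + \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i (w \cdot x_i - b)\bigr)

For a single sample (x_i, y_i), the (sub)gradients are

\nabla_w J = \begin{cases} 2\lambda w & \text{if } y_i (w \cdot x_i - b) \ge 1 \\ 2\lambda w - y_i x_i & \text{otherwise} \end{cases}
\qquad
\frac{\partial J}{\partial b} = \begin{cases} 0 & \text{if } y_i (w \cdot x_i - b) \ge 1 \\ y_i & \text{otherwise} \end{cases}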

import numpy as np
import pandas as pd

class SVM:
    def __init__(self, learning_rate=0.001, lambda_param=0.01, n_iters=1000):
        self.lr = learning_rate           # gradient descent step size
        self.lambda_param = lambda_param  # L2 regularization strength
        self.n_iters = n_iters            # number of passes over the data
        self.w = None
        self.b = None

    def fit(self, X, y):
        """Train the SVM using (sub)gradient descent on the hinge loss."""
        n_samples, n_features = X.shape
        y_ = np.where(y <= 0, -1, 1)  # convert labels to -1, 1

        self.w = np.zeros(n_features)
        self.b = 0

        for _ in range(self.n_iters):
            for idx, x_i in enumerate(X):
                # Sample lies on the correct side with margin >= 1
                condition = y_[idx] * (np.dot(x_i, self.w) - self.b) >= 1
                if condition:
                    # Only the regularization term contributes to the gradient
                    self.w -= self.lr * (2 * self.lambda_param * self.w)
                else:
                    # Hinge loss is active: include its gradient for w and b
                    self.w -= self.lr * (2 * self.lambda_param * self.w - y_[idx] * x_i)
                    self.b -= self.lr * y_[idx]

    def predict(self, X):
        """Predict the class (-1 or 1) of the input data."""
        approx = np.dot(X, self.w) - self.b
        return np.sign(approx)

# Load the CSV file
data = pd.read_csv('data.csv')
X = data[['Feature1', 'Feature2']].values
y = data['Label'].values  # binary labels; fit() maps any label <= 0 to -1 and the rest to 1

# Train the SVM from scratch
svm = SVM(learning_rate=0.001, lambda_param=0.01, n_iters=1000)
svm.fit(X, y)
predictions = svm.predict(X)

# Print the predicted labels
print('Predictions:', predictions)

Explanation:

• Loss Function: Hinge loss penalizes misclassified and low-margin samples, while the L2 regularization term encourages a large margin between classes.
• Weight Update: Gradient descent adjusts the weight vector w and bias b for every training sample in each iteration.
• Prediction: The predicted class is the sign of w · x - b for an input sample x.
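
Because data.csv is not bundled with this writeup, one quick way to sanity-check the class is on synthetic, linearly separable data; the blob centers, spread, and sample counts below are arbitrary illustrative choices, not part of the lab dataset.

import numpy as np

# Two Gaussian blobs on opposite sides of the origin (hypothetical toy data)
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
X_toy = np.vstack([X_pos, X_neg])
y_toy = np.array([1] * 50 + [-1] * 50)

# Train the from-scratch SVM defined above and check it fits the toy data
model = SVM(learning_rate=0.001, lambda_param=0.01, n_iters=1000)
model.fit(X_toy, y_toy)
print('Toy training accuracy:', np.mean(model.predict(X_toy) == y_toy))

A linearly separable set like this should reach near-perfect training accuracy, which makes it a quick way to catch sign or indexing bugs in the update rules.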

SVM Using sklearn

With sklearn, an SVM can be trained and evaluated in just a few lines of code.

from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd

# Load the CSV file
data = pd.read_csv('data.csv')
X = data[['Feature1', 'Feature2']].values
y = data['Label'].values  # binary labels, e.g. -1 and 1

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Apply SVM using sklearn
clf = svm.SVC(kernel='linear')  # linear kernel for binary classification
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)

# Check accuracy
accuracy = accuracy_score(y_test, predictions)
print('Accuracy:', accuracy)

Explanation:

• Kernel: The linear kernel is used for basic linear classification. For more complex data, other kernels such as rbf (Radial Basis Function) can be used; a sketch follows this list.
• Train-Test Split: The dataset is split into training and testing sets so the model is evaluated on data it did not see during training.
• Accuracy: The accuracy on the test set is computed to assess the classifier's performance.
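
As mentioned in the kernel bullet, switching to a non-linear decision boundary only changes how the classifier is constructed. A minimal sketch reusing the split above; the C and gamma values are sklearn's usual defaults, not tuned for any particular dataset:

# Same pipeline, but with a non-linear RBF kernel
clf_rbf = svm.SVC(kernel='rbf', C=1.0, gamma='scale')
clf_rbf.fit(X_train, y_train)
rbf_predictions = clf_rbf.predict(X_test)
print('RBF accuracy:', accuracy_score(y_test, rbf_predictions))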
