Experiment 4

The document contains Python code that implements a Support Vector Machine (SVM) classifier using the Iris dataset for binary classification of classes 0 and 1. It standardizes the features, splits the dataset into training and testing sets, and visualizes the decision boundaries for different kernel options (linear, polynomial, and radial basis function). The code includes a function to plot the decision boundaries based on the trained SVM model.


import numpy as np
import matplotlib.pyplot as plt

from sklearn import datasets
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Load dataset
iris = datasets.load_iris()
X = iris.data[:, :2]  # take only the first two features for 2D visualization
y = iris.target

# For simplicity, keep only classes 0 and 1 (binary classification)
X = X[y != 2]
y = y[y != 2]

# Standardize the features
scaler = StandardScaler()
X = scaler.fit_transform(X)

# Split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Kernel options
kernels = ['linear', 'poly', 'rbf']

# Plotting function
def plot_decision_boundary(clf, X, y, title):
    h = 0.02  # step size in the mesh
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    plt.figure(figsize=(6, 4))
    plt.contourf(xx, yy, Z, alpha=0.3)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=30, edgecolors='k')
    plt.title(f"SVM with {title} kernel")
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")
    plt.show()

# Train and visualize for each kernel
for kernel in kernels:
    if kernel == 'poly':
        clf = SVC(kernel=kernel, degree=3, C=1.0)
    else:
        clf = SVC(kernel=kernel, C=1.0)

    clf.fit(X_train, y_train)
    plot_decision_boundary(clf, X, y, kernel)
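Note that the script creates a held-out test split but never evaluates on it. As a minimal sketch of how that gap could be closed (an assumed extension, not part of the original experiment), the same loop can report test accuracy using SVC's built-in score method:

# Assumed extension: evaluate each trained kernel on the held-out test split
for kernel in kernels:
    if kernel == 'poly':
        clf = SVC(kernel=kernel, degree=3, C=1.0)
    else:
        clf = SVC(kernel=kernel, C=1.0)
    clf.fit(X_train, y_train)
    # score() returns mean accuracy on the given data and labels
    print(f"{kernel} kernel test accuracy: {clf.score(X_test, y_test):.3f}")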
