
BTCS619-18 2232884

TASK 8: Deploy Support Vector Machine, Apriori Algorithm
In [3]: import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Read CSV

In [4]: df = pd.read_csv('iris.csv')
df.head()

Out[4]:    Id  SepalLengthCm  SepalWidthCm  PetalLengthCm  PetalWidthCm      Species
        0   1            5.1           3.5            1.4           0.2  Iris-setosa
        1   2            4.9           3.0            1.4           0.2  Iris-setosa
        2   3            4.7           3.2            1.3           0.2  Iris-setosa
        3   4            4.6           3.1            1.5           0.2  Iris-setosa
        4   5            5.0           3.6            1.4           0.2  Iris-setosa

In [5]: df.dtypes

Out[5]: Id int64
SepalLengthCm float64
SepalWidthCm float64
PetalLengthCm float64
PetalWidthCm float64
Species object
dtype: object

In [6]: df.columns

Out[6]: Index(['Id', 'SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm',
               'Species'],
              dtype='object')

Defining Features and Target Variable

In [7]: x=df[['SepalLengthCm','SepalWidthCm','PetalLengthCm', 'PetalWidthCm']]


y=df['Species']

Splitting Data into Training and Testing Sets

In [8]: from sklearn.model_selection import train_test_split



x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
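
Note that train_test_split is called without a random_state, so a different 20% test set is drawn on every run and the accuracy figures below can vary between runs. A minimal reproducible variant (random_state=42 and the stratify option are illustrative choices, not part of the original task) would be:

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42, stratify=y)  # fixed seed, class-balanced split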

Training the SVM Model Using Linear Kernel

In [9]: from sklearn.svm import SVC


svclassifier=SVC(kernel='linear')
svclassifier.fit(x_train,y_train)

Out[9]: SVC(kernel='linear')

In [10]: y_pred =svclassifier.predict(x_test)

Classification Report

In [11]: from sklearn.metrics import classification_report,confusion_matrix


print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test, y_pred))

[[ 9  0  0]
 [ 0 11  2]
 [ 0  0  8]]
                 precision    recall  f1-score   support

    Iris-setosa       1.00      1.00      1.00         9
Iris-versicolor       1.00      0.85      0.92        13
 Iris-virginica       0.80      1.00      0.89         8

       accuracy                           0.93        30
      macro avg       0.93      0.95      0.94        30
   weighted avg       0.95      0.93      0.93        30
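
The single accuracy figure used later for the bar graph can also be computed directly rather than read off the report text; a short sketch using sklearn's accuracy_score:

from sklearn.metrics import accuracy_score

print(accuracy_score(y_test, y_pred))  # approx. 0.93 for the split shown above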

Train Model Using Poly Kernel

In [12]: from sklearn.svm import SVC


svclassifier=SVC(kernel='poly',degree=8)
svclassifier.fit(x_train,y_train)

Out[12]: SVC(degree=8, kernel='poly')

In [13]: y_pred=svclassifier.predict(x_test)

Classification Report

In [14]: from sklearn.metrics import classification_report,confusion_matrix


print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test, y_pred))

[[ 9  0  0]
 [ 0 11  2]
 [ 0  0  8]]
                 precision    recall  f1-score   support

    Iris-setosa       1.00      1.00      1.00         9
Iris-versicolor       1.00      0.85      0.92        13
 Iris-virginica       0.80      1.00      0.89         8

       accuracy                           0.93        30
      macro avg       0.93      0.95      0.94        30
   weighted avg       0.95      0.93      0.93        30
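
degree=8 is an unusually high polynomial degree for a four-feature dataset and risks overfitting. A grid-search sketch (the parameter values below are illustrative, not taken from the original task) shows how the degree and regularisation strength could be chosen instead of fixed by hand:

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Search over a small grid of polynomial degrees and C values with 5-fold CV
param_grid = {'degree': [2, 3, 4, 5], 'C': [0.1, 1, 10]}
search = GridSearchCV(SVC(kernel='poly'), param_grid, cv=5)
search.fit(x_train, y_train)
print(search.best_params_, round(search.best_score_, 3))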

Train Model Using RBF Kernel

In [15]: from sklearn.svm import SVC


svclassifier=SVC(kernel='rbf')
svclassifier.fit(x_train,y_train)

Out[15]: SVC()

In [16]: y_pred=svclassifier.predict(x_test)

Classification Report

In [17]: from sklearn.metrics import classification_report,confusion_matrix


print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test, y_pred))

[[ 9  0  0]
 [ 0 11  2]
 [ 0  0  8]]
                 precision    recall  f1-score   support

    Iris-setosa       1.00      1.00      1.00         9
Iris-versicolor       1.00      0.85      0.92        13
 Iris-virginica       0.80      1.00      0.89         8

       accuracy                           0.93        30
      macro avg       0.93      0.95      0.94        30
   weighted avg       0.95      0.93      0.93        30
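
With only 30 test samples, a single train/test split gives a fairly noisy estimate, which is one reason the linear, polynomial and RBF kernels all report the same 0.93 here. A sketch using 5-fold cross-validation (the fold count is an arbitrary choice) gives a steadier comparison across kernels:

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Mean cross-validated accuracy for each kernel on the full feature matrix
for kernel in ['linear', 'poly', 'rbf', 'sigmoid']:
    scores = cross_val_score(SVC(kernel=kernel), x, y, cv=5)
    print(kernel, round(scores.mean(), 3))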

Train Model Using Sigmoid Kernel

In [18]: from sklearn.svm import SVC


svclassifier=SVC(kernel='sigmoid')
svclassifier.fit(x_train,y_train)

Out[18]: SVC(kernel='sigmoid')

In [19]: y_pred=svclassifier.predict(x_test)

Classification Report

In [20]: from sklearn.metrics import classification_report,confusion_matrix


print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test, y_pred))

[[ 0  0  9]
 [ 0  0 13]
 [ 0  0  8]]
                 precision    recall  f1-score   support

    Iris-setosa       0.00      0.00      0.00         9
Iris-versicolor       0.00      0.00      0.00        13
 Iris-virginica       0.27      1.00      0.42         8

       accuracy                           0.27        30
      macro avg       0.09      0.33      0.14        30
   weighted avg       0.07      0.27      0.11        30

C:\Users\jaska\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1531: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
(The same warning is printed three times in the original output.)
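
The sigmoid kernel predicts only Iris-virginica here, which is a common symptom of applying a kernel SVM to unscaled features; scikit-learn recommends standardising inputs before fitting an SVC. A minimal sketch (StandardScaler and the Pipeline wrapper are additions, not part of the original task) that typically lifts the sigmoid result:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Standardise the four features, then fit the sigmoid-kernel SVM on the same split
scaled_svm = make_pipeline(StandardScaler(), SVC(kernel='sigmoid'))
scaled_svm.fit(x_train, y_train)
print(scaled_svm.score(x_test, y_test))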

Bar Graph

In [21]: import matplotlib.pyplot as plt

# Accuracy (%) of each kernel, taken from the classification reports above
categories = ['linear', 'poly', 'rbf', 'sigmoid']
values = [93, 93, 93, 27]
colors = ['red', 'blue', 'green', 'purple']
plt.bar(categories, values, color=colors)
plt.title('SVM Accuracy by Kernel')
plt.xlabel('Kernel')
plt.ylabel('Accuracy rate (%)')
plt.show()
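
Apriori Algorithm (Sketch)

The task title also asks for the Apriori algorithm, which the notebook above does not reach. Association-rule mining needs transactional (basket-style) data rather than the Iris measurements, so the sketch below uses a tiny invented one-hot transaction table and the third-party mlxtend package (assumed to be installed); it is an illustration of the API, not part of the original submission.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot encoded transactions: each row is one basket
transactions = pd.DataFrame({
    'bread':  [1, 1, 0, 1, 1],
    'butter': [1, 0, 0, 1, 1],
    'milk':   [0, 1, 1, 1, 0],
}, dtype=bool)

# Frequent itemsets with support >= 0.4, then rules with confidence >= 0.7
frequent_itemsets = apriori(transactions, min_support=0.4, use_colnames=True)
rules = association_rules(frequent_itemsets, metric='confidence', min_threshold=0.7)
print(rules[['antecedents', 'consequents', 'support', 'confidence']])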
