(MORENO) Flower Category Analysis.ipynb - Colab


# import the libraries


import numpy as np
import matplotlib.pyplot as plt

# import the dataset


from sklearn.datasets import load_iris
data = load_iris()
# (150 samples; features: sepal length, sepal width, petal length, petal width)
x = data.data
y = data.target
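
A quick way to confirm what the four columns and three classes actually are is to print the metadata that load_iris returns alongside the arrays:

print(data.feature_names)  # ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
print(data.target_names)   # ['setosa' 'versicolor' 'virginica']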

# split the dataset (training, test)


from sklearn.model_selection import train_test_split
train_X, test_X, train_Y, test_Y = train_test_split(x, y)

x.shape

(150, 4)

train_X.shape

(112, 4)
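
train_test_split holds out 25% of the rows by default, which is why 112 of the 150 samples land in the training set. The call above is unseeded, so the accuracies below will vary from run to run; a reproducible, stratified variant might look like this (the seed is an arbitrary choice, not part of the original notebook):

train_X, test_X, train_Y, test_Y = train_test_split(
    x, y,
    test_size=0.25,   # the sklearn default: 38 of 150 samples held out
    random_state=0,   # assumed seed, only so results are repeatable
    stratify=y        # keep the three classes balanced in both splits
)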

# perform modeling
# Logistic Regression, SVM, Decision Tree, KNN
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn.tree import DecisionTreeClassifier

model = LogisticRegression()
model.fit(train_X, train_Y)
print('Logistic Regression:', model.score(test_X, test_Y))

Logistic Regression: 0.9473684210526315

model_1 = svm.SVC()
model_1.fit(train_X, train_Y)
print('SVM:', model_1.score(test_X, test_Y))

SVM: 0.9210526315789473

model_2 = DecisionTreeClassifier()
model_2.fit(train_X, train_Y)
print('Decision Tree:', model_2.score(test_X, test_Y))

Decision Tree: 0.9210526315789473

model_3 = KNeighborsClassifier(n_neighbors=3)
model_3.fit(train_X, train_Y)
print('KNN:', model_3.score(test_X, test_Y))

KNN: 0.9210526315789473

# use a for loop to find the optimal number of neighbors


t = []
for i in range(1, 11):
    model_4 = KNeighborsClassifier(n_neighbors=i)
    model_4.fit(train_X, train_Y)
    acc = model_4.score(test_X, test_Y)  # score once, reuse for the print and the plot
    print('neighbors:{}, accuracy:{}'.format(i, acc))
    t.append(acc)
plt.plot(range(1, 11), t)


neighbors:1, accuracy:0.9210526315789473
neighbors:2, accuracy:0.9210526315789473
neighbors:3, accuracy:0.9210526315789473
neighbors:4, accuracy:0.9210526315789473
neighbors:5, accuracy:0.8947368421052632
neighbors:6, accuracy:0.9210526315789473
neighbors:7, accuracy:0.8947368421052632
neighbors:8, accuracy:0.9210526315789473
neighbors:9, accuracy:0.9210526315789473
neighbors:10, accuracy:0.9210526315789473
[<matplotlib.lines.Line2D at 0x79a255039300>]
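
Adding axis labels in the same cell makes the plot easier to read, and the best k can be picked programmatically from t; a small sketch (these lines are an addition, not in the original cell):

plt.xlabel('n_neighbors')
plt.ylabel('test accuracy')
best_k = int(np.argmax(t)) + 1  # t[0] holds the score for n_neighbors=1
print('best n_neighbors:', best_k, 'accuracy:', t[best_k - 1])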



# examine the effect of data preprocessing (standardization)
from sklearn.preprocessing import StandardScaler
std = StandardScaler()
train_X_std = std.fit_transform(train_X)
test_X_std = std.transform(test_X)
print('after', train_X_std.std(axis=0), train_X_std.mean(axis=0))
print('before', train_X.std(axis=0), train_X.mean(axis=0))

after [1. 1. 1. 1.] [ 3.63300659e-15 -1.78626954e-15 8.87187149e-17 3.60326848e-16]
before [0.7916129 0.45200446 1.76559289 0.75629954] [5.76160714 3.0875 3.52321429 1.10803571]
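
As an aside, the fit_transform-on-train / transform-on-test pattern above keeps test data out of the scaler; sklearn's Pipeline packages the same steps so that can't be gotten wrong. A sketch equivalent to the manual version (make_pipeline is standard sklearn, but this cell is not in the original notebook):

from sklearn.pipeline import make_pipeline
pipe = make_pipeline(StandardScaler(), svm.SVC())
pipe.fit(train_X, train_Y)   # the scaler inside is fit on training data only
print('SVM (pipeline):', pipe.score(test_X, test_Y))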

model_1 = svm.SVC()
model_1.fit(train_X_std, train_Y)
print('SVM:', model_1.score(test_X_std, test_Y))

SVM: 0.9210526315789473
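
The SVM score happens to be unchanged here, but distance-based models such as KNN are usually the most sensitive to feature scale; the analogous check for KNN would be (a sketch, not run in the original notebook, so no output is shown):

model_5 = KNeighborsClassifier(n_neighbors=3)
model_5.fit(train_X_std, train_Y)
print('KNN (standardized):', model_5.score(test_X_std, test_Y))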
