
Practical 6A & 6B

The document demonstrates using decision trees and support vector machines to classify a dataset. It loads and prepares the data, trains decision tree and SVM models on the data, makes predictions on test data, and calculates the accuracy of the predictions.


Practical No. 6A
Aim: Demonstrate Decision Tree and display the accuracy score
Code:
####################### Decision Tree #########################
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn import tree
balance_data = pd.read_csv('C:/Users/Rohit/Sem V/AI practicals/balance-scale.data',
                           sep=',', header=None)
print("Dataset Length:: ", len(balance_data))
print("Dataset Shape:: ", balance_data.shape)
print(balance_data.head())

X = balance_data.values[:, 1:5]
Y = balance_data.values[:, 0]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.4, random_state = 100)

clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100,
                                  max_depth=3, min_samples_leaf=5)
clf_gini.fit(X_train, y_train)

clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100,
                                     max_depth=3, min_samples_leaf=5)
clf_entropy.fit(X_train, y_train)
y_pred_en = clf_entropy.predict(X_test)
y_pred_gini = clf_gini.predict(X_test)

print(y_pred_en)
print(y_pred_gini)

#print(y_test)

print("Accuracy (entropy):: ", accuracy_score(y_test, y_pred_en) * 100)
print("Accuracy (gini):: ", accuracy_score(y_test, y_pred_gini) * 100)
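The listing imports `from sklearn import tree` but never uses it; that import is typically for inspecting the fitted tree. The sketch below shows one way to print the learned split rules with `export_text`. It substitutes scikit-learn's bundled iris dataset so it runs without the local `balance-scale.data` file; the practical's own DataFrame would work the same way.

```python
# Sketch: inspecting a trained decision tree as text rules.
# Uses sklearn's bundled iris dataset in place of the local CSV path.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.4, random_state=100)

# Same hyperparameters as the practical's gini classifier
clf = DecisionTreeClassifier(criterion="gini", random_state=100,
                             max_depth=3, min_samples_leaf=5)
clf.fit(X_train, y_train)

# Print the learned splits as indented if/else rules
rules = export_text(clf, feature_names=list(iris.feature_names))
print(rules)
```

`export_text` makes the `max_depth=3` constraint visible: the printed rule tree never nests more than three levels deep.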

Output:
Practical No. 6B
Aim: Demonstrate Support Vector Machine and display the accuracy score
Code:
##################### SVM Classification ###################
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
balance_data = pd.read_csv('C:/Users/Rohit/Sem V/AI practicals/balance-scale.data',
                           sep=',', header=None)
print("Dataset Length:: ", len(balance_data))
print("Dataset Shape:: ", balance_data.shape)
print(balance_data.head())

X = balance_data.values[:, 1:5]
Y = balance_data.values[:, 0]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3,
                                                    random_state=100)

svclassifier = SVC(kernel='linear')
svclassifier.fit(X_train, y_train)
print(y_test)

y_pred = svclassifier.predict(X_test)

print(y_pred)

print(accuracy_score(y_test, y_pred) * 100)
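The practical fixes `kernel='linear'`, but `SVC` also supports non-linear kernels. As an optional extension, the sketch below compares three kernels on the same split; it substitutes scikit-learn's bundled iris dataset so it runs without the local `balance-scale.data` file.

```python
# Sketch: comparing SVC kernels on one train/test split.
# Uses sklearn's bundled iris dataset in place of the local CSV path.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=100)

scores = {}
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel)
    clf.fit(X_train, y_train)
    # Accuracy as a percentage, matching the practical's convention
    scores[kernel] = accuracy_score(y_test, clf.predict(X_test)) * 100
    print(kernel, "::", scores[kernel])
```

Which kernel wins depends on the dataset; on the small balance-scale data the linear kernel used in the practical is a reasonable default.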

Output:
