AAM 5th Practical

The document provides a Python code implementation of a Decision Tree Classifier using the Iris dataset for classification. It includes data loading, training, evaluation, and visualization of the decision tree, achieving an accuracy of 1.0. The code also demonstrates how to predict the class of a new data sample.

5. Write a Python program to implement a decision tree for classification using a suitable dataset.

Python Code: Decision Tree Classification


# Importing necessary libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text, plot_tree
from sklearn.metrics import accuracy_score, classification_report
import matplotlib.pyplot as plt

# Load the Iris dataset
iris = load_iris()
X = iris.data    # Features: sepal length, sepal width, petal length, petal width
y = iris.target  # Target: Iris species (Setosa, Versicolor, Virginica)

# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

# Initialize the Decision Tree Classifier
dt_classifier = DecisionTreeClassifier(criterion='gini', max_depth=3,
                                       random_state=42)

# Train the model
dt_classifier.fit(X_train, y_train)

# Make predictions on the test set
y_pred = dt_classifier.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy of Decision Tree Classifier:", accuracy)
print("\nClassification Report:\n", classification_report(y_test, y_pred))

# Display the structure of the decision tree
tree_rules = export_text(dt_classifier, feature_names=iris.feature_names)
print("\nDecision Tree Rules:\n", tree_rules)

# Visualize the Decision Tree
plt.figure(figsize=(12, 8))
plot_tree(dt_classifier, feature_names=iris.feature_names,
          class_names=iris.target_names, filled=True)
plt.title("Decision Tree Visualization")
plt.show()

# Example prediction for a new data point
sample_data = [[5.1, 3.5, 1.4, 0.2]]  # Example input
predicted_class = dt_classifier.predict(sample_data)
print("Predicted class for sample data:", iris.target_names[predicted_class[0]])
Accuracy of Decision Tree Classifier: 1.0

Classification Report:
               precision    recall  f1-score   support

           0       1.00      1.00      1.00        19
           1       1.00      1.00      1.00        13
           2       1.00      1.00      1.00        13

    accuracy                           1.00        45
   macro avg       1.00      1.00      1.00        45
weighted avg       1.00      1.00      1.00        45

Decision Tree Rules:
|--- petal length (cm) <= 2.45
|   |--- class: 0
|--- petal length (cm) >  2.45
|   |--- petal length (cm) <= 4.75
|   |   |--- petal width (cm) <= 1.60
|   |   |   |--- class: 1
|   |   |--- petal width (cm) >  1.60
|   |   |   |--- class: 2
|   |--- petal length (cm) >  4.75
|   |   |--- petal width (cm) <= 1.75
|   |   |   |--- class: 1
|   |   |--- petal width (cm) >  1.75
|   |   |   |--- class: 2
Predicted class for sample data: setosa

Explanation of Code

1. Dataset:
   - The Iris dataset contains four features (sepal length, sepal width, petal length, petal width) and three classes (Setosa, Versicolor, Virginica).
2. Data Splitting:
   - The dataset is divided into training (70%) and testing (30%) subsets.
3. Decision Tree Classifier:
   - criterion='gini': Specifies Gini impurity as the splitting criterion.
   - max_depth=3: Limits the depth of the tree to avoid overfitting.
4. Evaluation:
   - The model is evaluated using accuracy and a classification report.
5. Visualization:
   - plot_tree: Visualizes the tree structure.
   - export_text: Prints the textual rules of the decision tree.
6. Prediction:
   - Predicts the class of a new data sample based on learned patterns.
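The Gini impurity mentioned in step 3 is easy to compute by hand. As a minimal sketch (not part of the original program; the helper name gini_impurity is ours), the function below evaluates the quantity the classifier minimizes when choosing each split:

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A pure node (all one class) has impurity 0.0
print(gini_impurity([0, 0, 0, 0]))    # 0.0
# Three classes in equal proportion, as at the Iris root node: 1 - 3*(1/3)^2
print(gini_impurity([0, 1, 2] * 10))  # ~0.667
```

A split is preferred when it lowers the weighted average impurity of the child nodes compared with the parent, which is why the first split on petal length (cm) <= 2.45 isolates the pure Setosa node immediately.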

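As a cross-check on step 5, the textual rules printed by export_text can be transcribed into an ordinary Python function. This is an illustrative sketch, not part of the original program: the function name classify_iris is ours, and the thresholds are copied from the rules shown above for this particular random_state.

```python
def classify_iris(petal_length, petal_width):
    """Hand-transcription of the printed decision tree rules.
    Returns 0 (setosa), 1 (versicolor) or 2 (virginica)."""
    if petal_length <= 2.45:
        return 0
    if petal_length <= 4.75:
        return 1 if petal_width <= 1.60 else 2
    return 1 if petal_width <= 1.75 else 2

# The sample point [5.1, 3.5, 1.4, 0.2] has petal length 1.4 and petal width 0.2
print(classify_iris(1.4, 0.2))  # 0, i.e. setosa, matching the prediction above
```

Note that the fitted tree only ever tests petal length and petal width; the sepal features are never used, which the transcription makes explicit.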