
Since 2001

Bhartiya Gramin Punarrachna Sanstha’s


Hi-Tech Institute of Technology, Aurangabad
A Pioneer to Shape Global Technocrats
Approved By AICTE, DTE Govt. of Maharashtra & Affiliated to Dr. Babasaheb Ambedkar Technological University, Lonere, Raigad
P-119, Bajajnagar, MIDC Waluj, Aurangabad, Maharashtra, India - 431136 | P: (0240) 2552240, 2553495, 2553496 | Web: http://hitechengg.edu.in/

DEPARTMENT: COMPUTER SCIENCE & ENGINEERING

PRACTICAL EXPERIMENT INSTRUCTION SHEET

EXPERIMENT TITLE: IMPLEMENTING DECISION TREE.

EXPERIMENT NO.: 06        SUBJECT: Machine Learning


CLASS: TE CSE        SEMESTER: VI
Aim: To study and implement the Decision Tree algorithm.
Hardware Requirement:

Intel(R) Core(TM) i5-10505 CPU @ 3.20 GHz Processor, 8 GB RAM, 256 GB HDD, 20” LCD
Monitor, Keyboard, Mouse.
Software Requirement:
Operating System, Python 2.7, PyCharm, and Anaconda.
DESCRIPTION:
Decision Tree is one of the most powerful and popular algorithms. The decision-tree algorithm falls
under the category of supervised learning algorithms and works for both continuous and
categorical output variables.

Used Python Packages:


1. sklearn :
 In Python, sklearn is a machine learning package which includes many ML algorithms.
 Here, we are using some of its modules, namely train_test_split, DecisionTreeClassifier and
accuracy_score.
2. NumPy :
 It is a numerical Python module which provides fast math functions for calculations.
 It is used to read data into NumPy arrays and to manipulate it.
3. Pandas :
 Used to read and write different files.
 Data manipulation can be done easily with dataframes.
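The snippet below is a minimal sketch of how these three packages come together in this experiment; the specific sklearn modules named above live in sklearn.model_selection, sklearn.tree and sklearn.metrics respectively.

import numpy as np                                     # fast numeric arrays
import pandas as pd                                    # reading data into DataFrames
from sklearn.model_selection import train_test_split  # splitting data into train/test sets
from sklearn.tree import DecisionTreeClassifier       # the decision tree model
from sklearn.metrics import accuracy_score            # accuracy calculation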
Installation of the packages:
In Python, sklearn is the package which contains all the required modules to implement
machine learning algorithms. You can install the sklearn package using the commands
given below.
using pip :
pip install -U scikit-learn
Before using the above command, make sure you have the scipy and numpy packages installed.
If you don’t have pip, you can install it using:
python get-pip.py
using conda :
conda install scikit-learn
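A quick way to confirm the installation worked is to import the packages and print their versions (a minimal check; the version numbers will vary from machine to machine):

import sklearn, numpy, pandas
print(sklearn.__version__, numpy.__version__, pandas.__version__)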
Assumptions we make while using a Decision Tree:
 At the beginning, we consider the whole training set as the root.
 Attributes are assumed to be categorical for information gain, while for the Gini index attributes
are assumed to be continuous.
 Records are distributed recursively on the basis of attribute values.
 Statistical methods are used for ordering attributes as the root or an internal node.
Pseudocode :
1. Find the best attribute and place it on the root node of the tree.
2. Now, split the training set of the dataset into subsets. While making the subsets, make sure
that each subset of the training dataset has the same value for an attribute.
3. Find leaf nodes in all branches by repeating 1 and 2 on each subset.
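The sketch below illustrates this pseudocode in plain Python, assuming a small categorical dataset stored as a list of dicts with a 'label' key; the helper names (entropy, information_gain, build_tree) are illustrative and are not part of the experiment's source code, which relies on sklearn instead.

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, attribute):
    # Reduction in entropy obtained by splitting the rows on this attribute
    base = entropy([r['label'] for r in rows])
    remainder = 0.0
    for value in {r[attribute] for r in rows}:
        subset = [r for r in rows if r[attribute] == value]
        remainder += len(subset) / len(rows) * entropy([r['label'] for r in subset])
    return base - remainder

def build_tree(rows, attributes):
    labels = [r['label'] for r in rows]
    # Leaf: every record has the same class, or no attributes are left to split on
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    # Step 1: place the best attribute at the current node
    best = max(attributes, key=lambda a: information_gain(rows, a))
    # Step 2: split the records into subsets sharing one value of that attribute
    tree = {best: {}}
    for value in {r[best] for r in rows}:
        subset = [r for r in rows if r[best] == value]
        # Step 3: repeat on each subset until only leaf nodes remain
        tree[best][value] = build_tree(subset, [a for a in attributes if a != best])
    return tree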
While implementing the decision tree we will go through the following two phases:
1. Building Phase
 Preprocess the dataset.
 Split the dataset into train and test sets using the Python sklearn package.
 Train the classifier.
2. Operational Phase
 Make predictions.
 Calculate the accuracy.
Data Import:
 To import and manipulate the data we are using the pandas package provided in Python.
 Here, we are using a URL which fetches the dataset directly from the UCI repository, so there
is no need to download the dataset. When you run this code on your system, make sure it has
an active Internet connection.
 As the dataset is comma-separated, we pass the sep parameter’s value as “,”.
 Another thing to notice is that the dataset doesn’t contain a header row, so we pass the
header parameter’s value as None. If we do not pass the header parameter, the first line of
the dataset will be treated as the header.
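The lines below are a minimal standalone sketch of this import step, using the same UCI balance-scale URL and the same sep and header arguments as the full source code further down; the expected length and shape are taken from the output section of this sheet.

import pandas as pd

# header=None because the file has no header row; sep=',' because it is comma-separated
balance_data = pd.read_csv(
    'https://archive.ics.uci.edu/ml/machine-learning-'
    'databases/balance-scale/balance-scale.data',
    sep=',', header=None)

print("Dataset Length:", len(balance_data))   # expected: 625
print("Dataset Shape:", balance_data.shape)   # expected: (625, 5)
print(balance_data.head())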
Data Slicing:
 Before training the model we have to split the dataset into training and testing datasets.
 To split the dataset for training and testing we are using the sklearn function train_test_split.
 First of all we have to separate the target variable from the attributes in the dataset.
X = balance_data.values[:, 1:5]
Y = balance_data.values[:,0]
 Above are the lines from the code which separate the dataset. The variable X contains the
attributes while the variable Y contains the target variable of the dataset.
 Next step is to split the dataset for training and testing purpose.
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size = 0.3, random_state = 100)
 The above line splits the dataset for training and testing. As we are splitting the dataset in a
70:30 ratio between training and testing, we pass the test_size parameter’s value as 0.3.
 The random_state parameter seeds the pseudo-random number generator used for random
sampling, so the split is reproducible.
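As a quick sanity check (a hedged sketch, assuming the 625-row balance-scale dataset loaded above), a 70:30 split should leave about 437 training rows and 188 test rows, which matches the support total of 188 in the classification reports later in this sheet.

from sklearn.model_selection import train_test_split

X = balance_data.values[:, 1:5]   # attribute columns
Y = balance_data.values[:, 0]     # target column (class label B/L/R)

X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.3, random_state=100)

print(X_train.shape, X_test.shape)   # expected: (437, 4) (188, 4)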
Terms used in the code:
Gini index and information gain are the two criteria used to select, from the n attributes of the
dataset, which attribute is placed at the root node or an internal node.
Gini index:

 Gini Index is a metric to measure how often a randomly chosen element would be
incorrectly identified.
 An attribute with a lower Gini index should therefore be preferred.
 sklearn supports the “gini” criterion for the Gini index, and it uses “gini” as the default value.
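A minimal sketch of the Gini impurity calculation itself, assuming a plain Python list of class labels; the helper name gini_index is illustrative and does not appear in the experiment's source code.

from collections import Counter

def gini_index(labels):
    # Probability that a randomly chosen element would be incorrectly identified
    # if it were labelled according to the class distribution at this node.
    total = len(labels)
    return 1.0 - sum((count / total) ** 2 for count in Counter(labels).values())

print(gini_index(['L', 'L', 'R', 'B']))   # 0.625 for a mixed node
print(gini_index(['L', 'L', 'L']))        # 0.0 for a pure node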

SOURCE CODE:
# Run this program on your local Python
# interpreter, provided you have installed
# the required libraries.

# Importing the required packages
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report


# Function importing the dataset
def importdata():
    balance_data = pd.read_csv(
        'https://archive.ics.uci.edu/ml/machine-learning-' +
        'databases/balance-scale/balance-scale.data',
        sep=',', header=None)

    # Printing the dataset shape
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)

    # Printing the dataset observations
    print("Dataset: ", balance_data.head())
    return balance_data


# Function to split the dataset
def splitdataset(balance_data):

    # Separating the target variable
    X = balance_data.values[:, 1:5]
    Y = balance_data.values[:, 0]

    # Splitting the dataset into train and test
    X_train, X_test, y_train, y_test = train_test_split(
        X, Y, test_size=0.3, random_state=100)

    return X, Y, X_train, X_test, y_train, y_test


# Function to perform training with Gini index.
def train_using_gini(X_train, X_test, y_train):

    # Creating the classifier object
    clf_gini = DecisionTreeClassifier(criterion="gini",
                                      random_state=100, max_depth=3,
                                      min_samples_leaf=5)

    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini


# Function to perform training with entropy.
def train_using_entropy(X_train, X_test, y_train):

    # Decision tree with entropy
    clf_entropy = DecisionTreeClassifier(
        criterion="entropy", random_state=100,
        max_depth=3, min_samples_leaf=5)

    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy


# Function to make predictions
def prediction(X_test, clf_object):

    # Prediction on the test set
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred


# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):

    print("Confusion Matrix: ",
          confusion_matrix(y_test, y_pred))

    print("Accuracy : ",
          accuracy_score(y_test, y_pred) * 100)

    print("Report : ",
          classification_report(y_test, y_pred))


# Driver code
def main():

    # Building Phase
    data = importdata()
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)
    clf_gini = train_using_gini(X_train, X_test, y_train)
    clf_entropy = train_using_entropy(X_train, X_test, y_train)

    # Operational Phase
    print("Results Using Gini Index:")

    # Prediction using gini
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)

    print("Results Using Entropy:")

    # Prediction using entropy
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)


# Calling main function
if __name__ == "__main__":
    main()

OUTPUT:
Data Information:

Dataset Length:  625
Dataset Shape:  (625, 5)
Dataset:     0  1  2  3  4
0   B  1  1  1  1
1   R  1  1  1  2
2   R  1  1  1  3
3   R  1  1  1  4
4   R  1  1  1  5

Results Using Gini Index:

Predicted values:
['R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'R' 'L'
'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L'
'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'L' 'R'
'R' 'L' 'R' 'R' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'R'
'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'R'
'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L'
'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
'L' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R'
'L' 'R' 'R' 'L' 'L' 'R' 'R' 'R']

Confusion Matrix:  [[ 0  6  7]
 [ 0 67 18]
 [ 0 19 71]]
Accuracy :  73.4042553191
Report :
              precision    recall  f1-score   support

           B       0.00      0.00      0.00        13
           L       0.73      0.79      0.76        85
           R       0.74      0.79      0.76        90

 avg / total       0.68      0.73      0.71       188

Results Using Entropy:

Predicted values:
['R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L'
'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L'
'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L'
'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R'
'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'R' 'R' 'R' 'R' 'R'
'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L'
'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'L' 'R'
'R' 'R' 'L' 'L' 'L' 'R' 'R' 'R']

Confusion Matrix:  [[ 0  6  7]
 [ 0 63 22]
 [ 0 20 70]]
Accuracy :  70.7446808511
Report :
              precision    recall  f1-score   support

           B       0.00      0.00      0.00        13
           L       0.71      0.74      0.72        85
           R       0.71      0.78      0.74        90

 avg / total       0.66      0.71      0.68       188

STEPS TO PERFORM DECISION TREE:


 Step 1: Import the required libraries.
# import numpy package for array handling
import numpy as np

# import matplotlib.pyplot for plotting our result
import matplotlib.pyplot as plt

# import pandas for importing csv files
import pandas as pd
 Step 2: Initialize and print the Dataset.
# import dataset
# dataset = pd.read_csv('Data.csv')
# alternatively open up .csv file to read data

dataset = np.array(
[['Asset Flip', 100, 1000],
['Text Based', 500, 3000],
['Visual Novel', 1500, 5000],
['2D Pixel Art', 3500, 8000],
['2D Vector Art', 5000, 6500],
['Strategy', 6000, 7000],
['First Person Shooter', 8000, 15000],
['Simulator', 9500, 20000],
['Racing', 12000, 21000],
['RPG', 14000, 25000],
['Sandbox', 15500, 27000],
['Open-World', 16500, 30000],
['MMOFPS', 25000, 52000],
['MMORPG', 30000, 80000]
])

# print the dataset


print(dataset)
Output:
[['Asset Flip' '100' '1000']
['Text Based' '500' '3000']
['Visual Novel' '1500' '5000']
['2D Pixel Art' '3500' '8000']
['2D Vector Art' '5000' '6500']
['Strategy' '6000' '7000']
['First Person Shooter' '8000' '15000']
['Simulator' '9500' '20000']
['Racing' '12000' '21000']
['RPG' '14000' '25000']
['Sandbox' '15500' '27000']
['Open-World' '16500' '30000']
['MMOFPS' '25000' '52000']
['MMORPG' '30000' '80000']]
 Step 3: Select all the rows and column 1 from the dataset to “X”.
# select all rows by : and column 1
# by 1:2 representing features
X = dataset[:, 1:2].astype(int)

# print X
print(X)
Output:
[[ 100]
[ 500]
[ 1500]
[ 3500]
[ 5000]
[ 6000]
[ 8000]
[ 9500]
[12000]
[14000]
[15500]
[16500]
[25000]
[30000]]
 Step 4: Select all of the rows and column 2 from the dataset to “y”.
# select all rows by : and column 2
# by 2 to Y representing labels
y = dataset[:, 2].astype(int)

# print y
print(y)
Output:
[ 1000 3000 5000 8000 6500 7000 15000 20000 21000 25000 27000 30000 52000 80000]
 Step 5: Fit decision tree regressor to the dataset
# import the regressor
from sklearn.tree import DecisionTreeRegressor

# create a regressor object


regressor = DecisionTreeRegressor(random_state = 0)

# fit the regressor with X and Y data


regressor.fit(X, y)
Output:
DecisionTreeRegressor(ccp_alpha=0.0, criterion='mse', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort='deprecated',
random_state=0, splitter='best')
 Step 6: Predicting a new value
# predicting a new value

# test the output by changing values, like 3750


y_pred = regressor.predict([[3750]])

# print the predicted price


print("Predicted price: % d\n"% y_pred)
Output:
Predicted price: 8000
 Step 7: Visualising the result
# arange for creating a range of values
# from min value of X to max value of X
# with a difference of 0.01 between two
# consecutive values
X_grid = np.arange(min(X), max(X), 0.01)

# reshape for reshaping the data into
# a len(X_grid)*1 array, i.e. to make
# a column out of the X_grid values
X_grid = X_grid.reshape((len(X_grid), 1))

# scatter plot for original data


plt.scatter(X, y, color = 'red')

# plot predicted data


plt.plot(X_grid, regressor.predict(X_grid), color = 'blue')

# specify title
plt.title('Profit to Production Cost (Decision Tree Regression)')

# specify X axis label


plt.xlabel('Production Cost')

# specify Y axis label


plt.ylabel('Profit')

# show the plot


plt.show()

 Step 8: The tree is finally exported and shown in the TREE STRUCTURE below,
visualized using http://www.webgraphviz.com/ by copying the data from the ‘tree.dot’ file.
# import export_graphviz
from sklearn.tree import export_graphviz
# export the decision tree to a tree.dot file


# for visualizing the plot easily anywhere
export_graphviz(regressor, out_file ='tree.dot',
feature_names =['Production Cost'])
OUTPUT: Tree structure rendered from the exported ‘tree.dot’ file (visualized as described in Step 8).

Conclusion: The decision tree assists analysts in evaluating upcoming choices. It creates a visual
representation of all possible outcomes, rewards, and follow-up decisions in one document.
