Decision Tree Classification Algorithm
o In a decision tree, there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make decisions and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.
o The decisions or tests are performed on the basis of the features of the given dataset.
o The logic behind a decision tree is easy to understand because it has a tree-like structure.
Leaf Node: Leaf nodes are the final output nodes; the tree cannot be split further after reaching a leaf node.
Parent/Child node: A node that splits into sub-nodes is called a parent node, and the sub-nodes are called its child nodes.
In a decision tree, to predict the class of a given record, the algorithm starts from the root node of the tree. It compares the value of the root attribute with the corresponding attribute of the record (from the real dataset) and, based on the comparison, follows the branch and jumps to the next node.
At the next node, the algorithm again compares the record's attribute value with the sub-nodes and moves further down. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood using the algorithm below:
o Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
o Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM, described below).
o Step-3: Divide S into subsets that contain the possible values of the best attribute.
o Step-4: Generate the decision tree node that contains the best attribute.
o Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified any further; such a final node is called a leaf node. A minimal code sketch of this recursion is given right after this list.
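The sketch below illustrates Steps 1-5; it is not the scikit-learn implementation used later in this chapter, and the helper names (entropy, best_attribute, build_tree), the dict-based tree, and the toy job-offer data are assumptions made for the example.

from collections import Counter
import math

def entropy(labels):
    # Entropy(S) = -sum over classes of p * log2(p)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    # Step-2: pick the attribute with the highest information gain
    def gain(attr):
        remainder = 0.0
        for value in set(row[attr] for row in rows):
            subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return entropy(labels) - remainder
    return max(attributes, key=gain)

def build_tree(rows, labels, attributes):
    # Step-5 stopping rule: a pure node (or no attributes left) becomes a leaf
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    attr = best_attribute(rows, labels, attributes)   # Step-2
    tree = {attr: {}}                                 # Step-4
    for value in set(row[attr] for row in rows):      # Step-3: one subset per value
        idx = [i for i, row in enumerate(rows) if row[attr] == value]
        tree[attr][value] = build_tree([rows[i] for i in idx],
                                       [labels[i] for i in idx],
                                       [a for a in attributes if a != attr])
    return tree

rows = [{'salary': 'high', 'distance': 'near'},
        {'salary': 'high', 'distance': 'far'},
        {'salary': 'low', 'distance': 'near'}]
labels = ['accept', 'accept', 'decline']
print(build_tree(rows, labels, ['salary', 'distance']))
# {'salary': {'high': 'accept', 'low': 'decline'}}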
Example: Suppose a candidate has a job offer and wants to decide whether he should accept it or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by ASM). The root node splits further into the next decision node (distance from the office) and one leaf node based on the corresponding labels. The next decision node splits into one decision node (cab facility) and one leaf node. Finally, that decision node splits into two leaf nodes (Accepted offer and Declined offer). Consider the below diagram:
While implementing a decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, there is a technique called the Attribute Selection Measure (ASM). Using this measure, we can easily select the best attribute for the nodes of the tree. There are two popular ASM techniques:
o Information Gain
o Gini Index
1. Information Gain:
o Information gain measures the change in entropy after a dataset is segmented on an attribute.
o According to the value of information gain, we split the node and build the decision tree. It can be calculated using the formulas below:

Information Gain = Entropy(S) − [(Weighted Avg) × Entropy(each feature)]
Entropy(S) = −P(yes) log2 P(yes) − P(no) log2 P(no)

Where,
o S = the total number of samples
o P(yes) = probability of yes
o P(no) = probability of no
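For example (with made-up counts): a node holding 9 "yes" and 5 "no" samples has Entropy(S) = −(9/14)log2(9/14) − (5/14)log2(5/14) ≈ 0.940. If an attribute splits it into children with 6 yes/2 no (entropy ≈ 0.811) and 3 yes/3 no (entropy = 1.0), the information gain is 0.940 − (8/14 × 0.811 + 6/14 × 1.0) ≈ 0.048.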
2. Gini Index:
o The Gini index is a measure of impurity or purity used while creating a decision tree in the CART (Classification and Regression Tree) algorithm.
o It only creates binary splits; the CART algorithm uses the Gini index to create them. It can be calculated as: Gini Index = 1 − ∑j Pj²
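As a quick numeric check (made-up counts), a node with 6 "yes" and 4 "no" samples has Gini = 1 − (0.6² + 0.4²) = 0.48; a one-function sketch:

def gini_index(class_counts):
    # Gini Index = 1 - sum of squared class probabilities
    total = sum(class_counts)
    return 1 - sum((c / total) ** 2 for c in class_counts)

print(round(gini_index([6, 4]), 2))  # 0.48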
A tree that is too large increases the risk of overfitting, while a small tree may not capture all the important features of the dataset. Pruning is the technique of decreasing the size of the learned tree without reducing accuracy. There are mainly two types of tree pruning technology used:
o Cost Complexity Pruning
o Reduced Error Pruning
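scikit-learn (used in the implementation below) supports cost complexity pruning through the ccp_alpha parameter of DecisionTreeClassifier; a minimal sketch on the built-in Iris dataset, where the value 0.02 is an arbitrary illustration:

# comparing an unpruned tree with a cost-complexity-pruned one
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)
print(unpruned.tree_.node_count, pruned.tree_.node_count)  # the pruned tree has fewer nodes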
Now we will implement the Decision tree using Python. For this, we will use
the dataset "user_data.csv," which we have used in previous classification
models. By using the same dataset, we can compare the Decision tree
classifier with other classification models, such as KNN, SVM, Logistic Regression, etc.
Steps will also remain the same, which are given below:
1. Data Pre-processing step

# importing libraries
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

# importing the dataset
data_set = pd.read_csv('user_data.csv')

# extracting the independent and dependent variables
x = data_set.iloc[:, [2, 3]].values
y = data_set.iloc[:, 4].values
In the above code, we have pre-processed the data: we loaded the dataset and extracted the independent variables (columns 2 and 3, Age and Estimated Salary) and the dependent variable (column 4, Purchased).
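The steps below also assume the same train/test split and feature scaling used in the previous classification chapters; a minimal sketch, where test_size=0.25 matches the 100 test-set predictions counted in the confusion matrix later:

# splitting the dataset into a training set and a test set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

# feature scaling, as in the previous chapters
from sklearn.preprocessing import StandardScaler
st_x = StandardScaler()
x_train = st_x.fit_transform(x_train)
x_test = st_x.transform(x_test)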
2. Fitting a Decision-Tree algorithm to the Training set
Now we will fit the model to the training set. For this, we will import
the DecisionTreeClassifier class from sklearn.tree library. Below is the
code for it:
# fitting a Decision Tree classifier to the training set
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion='entropy', random_state=0)
classifier.fit(x_train, y_train)
In the above code, we have created a classifier object, in which we have passed two main parameters:
o criterion='entropy': use entropy-based information gain as the attribute selection measure.
o random_state=0: fix the random seed so the results are reproducible.
Below is the output for it:
Out[8]:
DecisionTreeClassifier(class_weight=None, criterion='entropy',
max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False,
random_state=0, splitter='best')
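As an optional check on the fitted model, recent scikit-learn versions can print the learned rules as text via export_text; a minimal sketch, where the feature names Age and EstimatedSalary are assumed from the dataset's columns:

# printing the learned decision rules as text
from sklearn.tree import export_text
print(export_text(classifier, feature_names=['Age', 'EstimatedSalary']))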
3. Predicting the test set result
Now we will predict the test set result. We will create a new prediction vector y_pred. Below is the code for it:

# predicting the test set result
y_pred = classifier.predict(x_test)
Output:
In the below output image, the predicted output and the real test output are given. We can clearly see that some values in the prediction vector differ from the real vector values; these are prediction errors.
4. Test accuracy of the result (Creation of Confusion matrix)
In the above output, we have seen that there were some incorrect predictions. To know the number of correct and incorrect predictions, we use the confusion matrix.
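A minimal sketch, following the confusion-matrix code used in the previous classification chapters:

# creating the confusion matrix from the test labels and predictions
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)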
Output:
In the above output image, we can see the confusion matrix, which has 6+3 = 9 incorrect predictions and 62+29 = 91 correct predictions.
Therefore, we can say that compared to other classification models,
the Decision Tree classifier made a good prediction.
5. Visualizing the training set result
Here we will visualize the training set result by plotting a graph for the decision tree classifier. The classifier will predict yes or no for the users who have either purchased or not purchased the SUV car, as we did in Logistic Regression. Below is the code for it:
# visualizing the training set result
from matplotlib.colors import ListedColormap
x_set, y_set = x_train, y_train
x1, x2 = nm.meshgrid(nm.arange(x_set[:, 0].min() - 1, x_set[:, 0].max() + 1, 0.01),
                     nm.arange(x_set[:, 1].min() - 1, x_set[:, 1].max() + 1, 0.01))
mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha=0.75, cmap=ListedColormap(('purple', 'green')))
mtp.xlim(x1.min(), x1.max())
mtp.ylim(x2.min(), x2.max())
for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c=ListedColormap(('purple', 'green'))(i), label=j)
mtp.title('Decision Tree Algorithm (Training set)')
mtp.xlabel('Age')
mtp.ylabel('Estimated Salary')
mtp.legend()
mtp.show()
Output:
The above output is completely different from the other classification models. It has both vertical and horizontal lines that split the dataset according to the age and estimated salary variables.
As we can see, the tree is trying to capture every data point, which is a case of overfitting.
6. Visualizing the test set result
Visualizing the test set result is similar to the training set visualization, except that the training set is replaced with the test set:

# visualizing the test set result
from matplotlib.colors import ListedColormap
x_set, y_set = x_test, y_test
x1, x2 = nm.meshgrid(nm.arange(x_set[:, 0].min() - 1, x_set[:, 0].max() + 1, 0.01),
                     nm.arange(x_set[:, 1].min() - 1, x_set[:, 1].max() + 1, 0.01))
mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha=0.75, cmap=ListedColormap(('purple', 'green')))
mtp.xlim(x1.min(), x1.max())
mtp.ylim(x2.min(), x2.max())
for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c=ListedColormap(('purple', 'green'))(i), label=j)
mtp.title('Decision Tree Algorithm (Test set)')
mtp.xlabel('Age')
mtp.ylabel('Estimated Salary')
mtp.legend()
mtp.show()
Output:
As we can see in the above image, there are some green data points within the purple region and vice versa. These are the incorrect predictions that we discussed with the confusion matrix.