Decision Tree Classification Algorithm (2)
o Decision Tree is a supervised learning technique that can be used for both classification and regression problems, but it is mostly preferred for solving classification problems. It is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.
o In a Decision tree, there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make decisions and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.
o The decisions or tests are performed on the basis of the features of the given dataset.
o It is a graphical representation for getting all the possible solutions to a
problem/decision based on given conditions.
o It is called a decision tree because, similar to a tree, it starts with the root node,
which expands on further branches and constructs a tree-like structure.
o In order to build a tree, we use the CART algorithm, which stands
for Classification and Regression Tree algorithm.
o A decision tree simply asks a question and, based on the answer (Yes/No), further splits the tree into subtrees.
o The diagram below explains the general structure of a decision tree:
Note: A decision tree can contain categorical data (YES/NO) as well as numeric data.
o Decision Trees usually mimic human thinking while making a decision, so they are easy to understand.
o The logic behind the decision tree can be easily understood because it shows a
tree-like structure.
In a decision tree, to predict the class of a given dataset, the algorithm starts from the root node of the tree. The algorithm compares the value of the root attribute with the corresponding attribute of the record (from the real dataset) and, based on the comparison, follows the branch and jumps to the next node.
At the next node, the algorithm again compares the attribute value with those of the other sub-nodes and moves further. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood using the algorithm below:
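As a rough illustration of this traversal, here is a minimal Python sketch, assuming the tree is stored as nested dictionaries keyed by attribute names; the tree structure and the sample record are hypothetical:

    def predict(tree, record):
        # A leaf is represented here as any non-dict value (the class label).
        if not isinstance(tree, dict):
            return tree
        attribute = tree["attribute"]                   # attribute tested at this node
        branch = tree["branches"][record[attribute]]    # follow the matching branch
        return predict(branch, record)                  # repeat until a leaf node is reached

    # Hypothetical tree for a job-offer decision (see the example further below).
    offer_tree = {
        "attribute": "salary",
        "branches": {
            "below_50k": "Declined offer",
            "above_50k": {
                "attribute": "distance_from_office",
                "branches": {"near": "Accepted offer", "far": "Declined offer"},
            },
        },
    }

    print(predict(offer_tree, {"salary": "above_50k", "distance_from_office": "near"}))
    # -> Accepted offer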
o Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
o Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
o Step-3: Divide S into subsets that contain the possible values of the best attribute.
o Step-4: Generate the decision tree node, which contains the best attribute.
o Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified any further; these final nodes are called leaf nodes.
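These steps can be summarized as a recursive procedure. The sketch below is a simplified, ID3-style outline of the idea, assuming categorical attributes, records stored as dictionaries, and a helper best_attribute that applies the chosen Attribute Selection Measure; the helper name and data layout are illustrative, not part of any specific library:

    from collections import Counter

    def build_tree(rows, attributes, target="label"):
        # Recursively grow a decision tree from a list of dict records (Steps 1-5).
        labels = [row[target] for row in rows]
        # Stop when the subset is pure or no attributes remain: return a leaf (majority label).
        if len(set(labels)) == 1 or not attributes:
            return Counter(labels).most_common(1)[0][0]
        best = best_attribute(rows, attributes, target)     # Step-2: apply the ASM
        node = {"attribute": best, "branches": {}}          # Step-4: decision node
        for value in {row[best] for row in rows}:           # Step-3: split S on `best`
            subset = [row for row in rows if row[best] == value]
            remaining = [a for a in attributes if a != best]
            node["branches"][value] = build_tree(subset, remaining, target)   # Step-5
        return node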
Example: Suppose there is a candidate who has a job offer and wants to decide whether he should accept the offer or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by ASM). The root node splits further into the next decision node (distance from the office) and one leaf node based on the corresponding labels. The next decision node further splits into one decision node (Cab facility) and one leaf node. Finally, that decision node splits into two leaf nodes (Accepted offer and Declined offer). Consider the below diagram:
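To make this example concrete, a scikit-learn version might look like the sketch below; the toy offer records and feature values are invented purely for illustration:

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical offers: salary (in thousands), commute distance (km),
    # cab facility (1 = provided), and the candidate's final decision.
    offers = pd.DataFrame({
        "salary":   [30, 45, 60, 80, 75, 90, 55, 85],
        "distance": [5, 20, 8, 25, 10, 30, 12, 6],
        "cab":      [0, 0, 1, 0, 1, 1, 0, 1],
        "decision": ["Declined", "Declined", "Accepted", "Declined",
                     "Accepted", "Accepted", "Declined", "Accepted"],
    })

    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(offers[["salary", "distance", "cab"]], offers["decision"])

    # Print the learned splits; the exact tree depends entirely on the toy data.
    print(export_text(clf, feature_names=["salary", "distance", "cab"]))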
Attribute Selection Measures
While implementing a Decision tree, the main issue that arises is how to select the best attribute for the root node and for the sub-nodes. To solve such problems, there is a technique called the Attribute Selection Measure (ASM). With this measurement, we can easily select the best attribute for the nodes of the tree. There are two popular techniques for ASM, which are:
o Information Gain
o Gini Index
1. Information Gain:
o Information gain is the measurement of changes in entropy after the
segmentation of a dataset based on an attribute.
o It calculates how much information a feature provides us about a class.
o According to the value of information gain, we split the node and build the
decision tree.
o A decision tree algorithm always tries to maximize the value of information gain, and the node/attribute having the highest information gain is split first. It can be calculated using the below formula:
Information Gain = Entropy(S) - [(Weighted Avg) * Entropy(each feature)]
Entropy specifies the randomness in the data and, for a two-class problem, can be calculated as:
Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)
Where,
o S = total number of samples
o P(yes) = probability of yes
o P(no) = probability of no
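A rough Python sketch of these two quantities is shown below; the label lists used in the example split are hypothetical:

    from collections import Counter
    from math import log2

    def entropy(labels):
        # Entropy(S) = -sum(p * log2(p)) over the class proportions in `labels`.
        total = len(labels)
        return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

    def information_gain(parent_labels, child_label_groups):
        # Entropy of the parent minus the weighted average entropy of the child subsets.
        total = len(parent_labels)
        weighted = sum(len(g) / total * entropy(g) for g in child_label_groups)
        return entropy(parent_labels) - weighted

    # Splitting 10 samples (6 yes / 4 no) into two subsets on some attribute value:
    parent = ["yes"] * 6 + ["no"] * 4
    children = [["yes"] * 5 + ["no"], ["yes"] + ["no"] * 3]
    print(round(information_gain(parent, children), 3))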
2. Gini Index:
o Gini index is a measure of impurity or purity used while creating a decision
tree in the CART(Classification and Regression Tree) algorithm.
o An attribute with a low Gini index should be preferred over one with a high Gini index.
o It creates only binary splits, and the CART algorithm uses the Gini index to create those binary splits.
o The Gini index can be calculated using the below formula:
Gini Index = 1 - sum_j (Pj)^2
where Pj is the proportion of samples belonging to class j.
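A minimal Python sketch of this formula, using made-up class labels:

    from collections import Counter

    def gini_index(labels):
        # Gini = 1 - sum(p_j^2) over the class proportions in `labels`.
        total = len(labels)
        return 1.0 - sum((n / total) ** 2 for n in Counter(labels).values())

    print(gini_index(["yes"] * 6 + ["no"] * 4))   # 1 - (0.6**2 + 0.4**2) = 0.48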
A too-large tree increases the risk of overfitting, while a small tree may not capture all the important features of the dataset. Pruning is therefore a technique that decreases the size of the learned tree without reducing accuracy. There are mainly two types of tree pruning techniques used:
o Cost Complexity Pruning
o Reduced Error Pruning
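As an illustration of how pruning is exposed in practice, the sketch below uses scikit-learn, where tree growth can be limited up front (pre-pruning) or weak branches can be removed after training via cost-complexity pruning; the dataset and parameter values are only illustrative:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Pre-pruning: stop growth early by capping depth and leaf size.
    pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5, random_state=0)

    # Post-pruning: grow the tree, then cut back weak branches (cost-complexity pruning).
    post_pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)

    for name, clf in [("pre-pruned", pre_pruned), ("post-pruned", post_pruned)]:
        clf.fit(X_train, y_train)
        print(name, "test accuracy:", round(clf.score(X_test, y_test), 3))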
Decision Tree:
The decision tree is one of the most powerful and popular tools for classification and prediction. A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node (terminal node) holds a class label.
A new instance is sorted down the tree from the root to a leaf according to the outcomes of these tests; an instance sorted down the leftmost branch of such a tree would, for example, be classified with the label of that leaf (e.g., as a negative instance).
In other words, we can say that a decision tree represents a disjunction of conjunctions of constraints on the attribute values of instances.
Decision trees are less appropriate for estimation tasks where the goal is
to predict the value of a continuous attribute.
Decision trees are prone to errors in classification problems with many classes and a relatively small number of training examples.
Decision trees can be computationally expensive to train. At each node, each candidate splitting field must be sorted before its best split can be found. In some algorithms, combinations of fields are used and a search must be made for optimal combining weights. Pruning algorithms can also be expensive, since many candidate sub-trees must be formed and compared.
What is a Decision Tree?
Decision trees can determine the worst, best, and expected values for several scenarios, and they perform well even if the actual model violates some of their assumptions. For example, they can help e-commerce companies predict whether a consumer is likely to purchase a specific product.
1. Entropy: Entropy is the measure of uncertainty or randomness in a data set. Entropy governs how a decision tree splits the data.
2. Information Gain: The information gain measures the decrease in entropy after the data set is split. It is calculated as:
Information Gain = Entropy(parent) - [weighted average] * Entropy(children)
3. Gini Index: The Gini Index is used to determine the correct variable for splitting nodes. It measures how often a randomly chosen element would be incorrectly classified.
4. Root Node: The root node is always the top node of a decision tree. It represents the entire
population or data sample, and it can be further divided into different sets.
5. Decision Node: Decision nodes are subnodes that can be split into different subnodes; they contain
at least two branches.
6. Leaf Node: A leaf node in a decision tree carries the final results. These nodes, which are also
known as terminal nodes, cannot be split any further.
How Does a Decision Tree Algorithm Work?
Suppose there are different animals, and you want to identify each animal and classify them based on
their features. We can easily accomplish this by using a decision tree.
We have to determine which features split the data so that the information gain is the highest. We can
do that by splitting the data using each feature and checking the information gain that we obtain from
them. The feature that returns the highest gain will be used for the first split.
For our demo, we will take features such as the animal's color and height into consideration.
We'll use the information gain method to determine which feature yields the maximum gain; that feature will also be used as the root node.
Suppose Color == Yellow results in the maximum information gain, so that is what we will use for
our first split at the root node.
Fig: Using Color == Yellow for our first split of decision tree
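That claim about the first split can be checked with a short calculation. The sketch below uses a hypothetical set of animal records and the entropy formula given earlier to compare the information gain of the two candidate tests:

    from collections import Counter
    from math import log2

    # Hypothetical animal records: (is_yellow, height, species).
    animals = [
        (True, 12, "giraffe"), (True, 3, "duck"), (True, 11, "giraffe"),
        (False, 15, "elephant"), (False, 2, "mouse"), (False, 14, "elephant"),
    ]

    def entropy(labels):
        total = len(labels)
        return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

    def information_gain(records, test):
        # Gain of splitting `records` with a boolean `test` applied to each record.
        labels = [species for _, _, species in records]
        yes_side = [r[2] for r in records if test(r)]
        no_side = [r[2] for r in records if not test(r)]
        weighted = sum(len(side) / len(records) * entropy(side)
                       for side in (yes_side, no_side) if side)
        return entropy(labels) - weighted

    candidate_splits = {
        "Color == Yellow": lambda r: r[0],
        "Height > 10":     lambda r: r[1] > 10,
    }
    gains = {name: information_gain(animals, test) for name, test in candidate_splits.items()}
    print(gains, "-> best first split:", max(gains, key=gains.get))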
The entropy after splitting should decrease considerably. However, we still need to split the child nodes on both branches to reach an entropy value of zero.
We will split both nodes using the 'height' variable, with height > 10 and height < 10 as our conditions.
The decision tree above can now predict all the classes of animals present in the data set.
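A library-based sketch of the same demo is shown below, using scikit-learn with the entropy criterion so that splits are chosen by information gain; the animal records are hypothetical, similar to those used above:

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical animal records using the color and height features from the demo.
    animals = pd.DataFrame({
        "is_yellow": [1, 1, 0, 0, 1, 0],
        "height":    [12, 3, 15, 2, 11, 14],
        "species":   ["giraffe", "duck", "elephant", "mouse", "giraffe", "elephant"],
    })

    clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
    clf.fit(animals[["is_yellow", "height"]], animals["species"])

    # The learned rules; which thresholds appear depends entirely on the toy data.
    print(export_text(clf, feature_names=["is_yellow", "height"]))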