Analysis of Various Decision Tree Algorithms For Classification in Data Mining
- Decision trees can perform well even if the assumptions are somewhat violated by the dataset from which the data is taken.

2.3 Types of Decision Trees
Decision trees used in data mining are mainly of two types:
- Classification tree, in which the predicted outcome is the class to which the data belongs, for example the outcome of a loan application as safe or risky.
- Regression tree, in which the predicted outcome can be considered a real number, for example the population of a state.
Both classification and regression trees have similarities as well as differences, such as the procedure used to determine where to split.
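The paper itself contains no code, but the two tree types map directly onto the classifier/regressor distinction in a common library such as scikit-learn (not referenced by the paper); the toy data below is invented purely for this illustration.

```python
# Illustration only: the paper does not use scikit-learn; the toy numeric data is
# invented here because scikit-learn trees expect numeric feature matrices.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Classification tree: the predicted outcome is a class label (e.g. a loan as safe/risky).
X_cls = [[25, 30000], [40, 80000], [35, 20000], [50, 120000]]   # toy [age, income] rows
y_cls = ["risky", "safe", "risky", "safe"]
clf = DecisionTreeClassifier(criterion="entropy").fit(X_cls, y_cls)
print(clf.predict([[45, 90000]]))   # predicts a class label

# Regression tree: the predicted outcome is a real number (e.g. population of a state).
X_reg = [[1990], [2000], [2010], [2020]]   # toy [census year] rows
y_reg = [4.8, 5.7, 6.4, 7.1]               # toy population in millions
reg = DecisionTreeRegressor().fit(X_reg, y_reg)
print(reg.predict([[2015]]))        # predicts a real number
```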
There are various decision tree algorithms, namely ID3 (Iterative Dichotomiser 3), C4.5, CART (Classification and Regression Tree), CHAID (CHi-squared Automatic Interaction Detector) and MARS. Out of these, we will be discussing the more popular ones, which are ID3, C4.5 and CART.

2.3.1 ID3 (Iterative Dichotomiser 3)
ID3 is an algorithm developed by Ross Quinlan to generate a decision tree from a dataset [12]. To construct a decision tree, ID3 uses a top-down, greedy search through the given sets, where each attribute at every tree node is tested to select the attribute that is best for classification of the given set [10]. The attribute with the highest information gain is therefore selected as the test attribute of the current node. ID3 is based on Occam's razor: small decision trees are preferred over larger ones. However, it does not always construct the smallest tree and is, therefore, a heuristic algorithm [6].

For building a decision tree model, ID3 only accepts categorical attributes. ID3 does not give accurate results when there is noise or when it is implemented serially, so the data is preprocessed before constructing the decision tree [1]. To construct the tree, information gain is calculated for each attribute, and the attribute with the highest information gain becomes the root node; its possible values are denoted by arcs. Then all possible outcome instances are examined to see whether they belong to the same class. Instances of the same class are denoted by a single class name; otherwise, the instances are further classified on the basis of the splitting attribute.
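A minimal sketch of this attribute-selection step is given below (illustrative only; the function and variable names are assumptions, not the paper's code). Here `rows` would be a list of dictionaries of categorical attribute values and `labels` the corresponding class values.

```python
# Sketch of ID3's attribute-selection step on categorical data: the attribute with
# the highest information gain is chosen as the test at the current node.
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy(D) = -sum_i p_i * log2(p_i) over the class labels in D."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy(D) minus the weighted entropy of the partitions induced by attr."""
    total = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(label)
    remainder = sum(len(part) / total * entropy(part) for part in partitions.values())
    return entropy(labels) - remainder

def choose_attribute(rows, labels, attributes):
    """ID3 picks the attribute with the highest information gain."""
    return max(attributes, key=lambda a: information_gain(rows, labels, a))
```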
2.3.1.1 Advantages of ID3
- The training data is used to create understandable prediction rules.
- It builds the fastest as well as a short tree.
- ID3 searches the whole dataset to create the whole tree.
- It finds the leaf nodes, thus enabling the test data to be pruned and reducing the number of tests.
- The calculation time of ID3 is a linear function of the product of the number of characteristics and the number of nodes [9].

2.3.1.2 Disadvantages of ID3
- Classifying continuous data may prove to be expensive in terms of computation, as many trees have to be generated to see where to break the continuum.
- When given a large number of input values, ID3 is overly sensitive to features with a large number of values [2].

2.3.2 C4.5
C4.5 is an algorithm used to generate a decision tree which was also developed by Ross Quinlan. It is an extension of Quinlan's ID3 algorithm. C4.5 generates decision trees which can be used for classification, and therefore C4.5 is often referred to as a statistical classifier [11]. It is better than the ID3 algorithm because it deals with both continuous and discrete attributes, handles missing values, and prunes trees after construction. C5.0 is the commercial successor of C4.5: it is a lot faster, more memory efficient and builds smaller decision trees. C4.5 performs a tree pruning process by default, which leads to the formation of smaller trees, simpler rules and more intuitive interpretations.

C4.5 follows three steps in tree growth [3]:
- For splitting of categorical attributes, C4.5 follows an approach similar to the ID3 algorithm; continuous attributes always generate binary splits.
- Selecting the attribute with the highest gain ratio.
- These steps are repeatedly applied to new tree branches, and growth of the tree is stopped after the stopping criterion is checked.
Information gain is biased towards attributes with a larger number of values; thus C4.5 uses the gain ratio, which is a less biased selection criterion, sketched below.
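As a rough sketch (helper names are assumptions, not the paper's code), the gain ratio divides the information gain by the split information of the partition, which penalises attributes that fragment the data into many small groups:

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    total = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(label)
    parts = list(partitions.values())
    gain = entropy(labels) - sum(len(p) / total * entropy(p) for p in parts)
    # Split information: entropy of the partition sizes themselves.
    split_info = -sum((len(p) / total) * log2(len(p) / total) for p in parts)
    return gain / split_info if split_info > 0 else 0.0
```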
2.3.2.1 Advantages of C4.5
- C4.5 is easy to implement.
- C4.5 builds models that can be easily interpreted.
- It can handle both categorical and continuous values.
- It can deal with noise and with missing attribute values.

2.3.2.2 Disadvantages of C4.5
- A small variation in the data can lead to different decision trees when using C4.5.
- C4.5 does not work very well for a small training set.

2.3.3 CART
CART stands for Classification And Regression Trees. It was introduced by Breiman in 1984. The CART algorithm builds both classification and regression trees. The classification tree is constructed by CART through binary splitting of the attributes, with the Gini index used to select the splitting attribute. CART is also used for regression analysis with the help of the regression tree. The regression feature of CART can be used for forecasting a dependent variable given a set of predictor variables over a given period of time. CART has an average processing speed and supports both continuous and nominal attribute data.

2.3.3.1 Advantages of CART
- CART can handle missing values automatically using surrogate splits.
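To make the binary-splitting idea concrete, the sketch below (illustrative only, with assumed names) scores every two-way grouping of a categorical attribute's values with the weighted Gini index, as a CART-style learner does, and keeps the lowest-scoring grouping:

```python
from collections import Counter
from itertools import combinations

def gini(labels):
    """Gini(D) = 1 - sum_i p_i^2 over the class labels in D."""
    total = len(labels)
    return 1.0 - sum((n / total) ** 2 for n in Counter(labels).values())

def best_binary_split(rows, labels, attr):
    values = sorted({row[attr] for row in rows})
    best = None
    for r in range(1, len(values)):                 # proper, non-empty value subsets
        for subset in combinations(values, r):
            left = [lab for row, lab in zip(rows, labels) if row[attr] in subset]
            right = [lab for row, lab in zip(rows, labels) if row[attr] not in subset]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, set(subset))
    return best   # (weighted Gini index, values sent to the left branch)
```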
Gini(D) = 1 - Σ_{i=1}^{m} p_i^2

where pi is the probability that a tuple in D belongs to class Ci and is estimated by |Ci,D|/|D|. The sum is computed over the m classes. The attribute that reduces the impurity to the maximum level (i.e. has the minimum Gini index) is selected as the splitting attribute.

4. ILLUSTRATION SHOWING ATTRIBUTE SELECTION MEASURES
In this paper, we have used the database of an electronics store to see whether a person buys a laptop or not. Figure 1 shows a table of class-labeled training tuples from the electronics store. Each attribute takes discrete values. The class-label attribute buys_laptop has two distinct values (yes, no); therefore there are two distinct classes and the value of m is equal to 2.

Figure 1. Class-labeled training tuples from the Electronics Store database.

We assume:
Class P: buys_laptop = "yes"
Class N: buys_laptop = "no"
As there are 9 "yes" and 5 "no" values of the buys_laptop attribute, 9 tuples belong to class P and 5 tuples belong to class N.

The entropy is calculated as:

Entropy(D) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940

Gain(age, D) = Entropy(D) - (5/14) Entropy(S_youth) - (4/14) Entropy(S_middle-aged) - (5/14) Entropy(S_senior)

Gain(salary, D) = 0.029
Gain(graduate, D) = 0.151
Gain(credit_rating, D) = 0.048
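A quick numeric check of these figures is sketched below (illustrative only; Figure 1 is not reproduced in this extract, so the per-branch class counts for age are assumptions chosen to be consistent with the stated results):

```python
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

print(round(entropy([9, 5]), 3))    # Entropy(D) for 9 "yes" / 5 "no" tuples -> 0.94

# Assumed (yes, no) counts in the three age partitions of sizes 5, 4 and 5.
youth, middle_aged, senior = (2, 3), (4, 0), (3, 2)
gain_age = entropy([9, 5]) - sum(sum(part) / 14 * entropy(part)
                                 for part in (youth, middle_aged, senior))
print(round(gain_age, 3))           # about 0.247, larger than the other three gains
```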
For the salary attribute, the binary split on ({low, medium} and {high}) gives

Gini_{salary ∈ {low, medium}}(D) = 0.443 = Gini_{salary ∈ {high}}(D).

We calculated the Gini index values for the other subsets as well; the result was 0.458 for the subset ({low, high} and {medium}) and 0.450 for the subset ({medium, high} and {low}). Therefore, the best binary split for the salary attribute was found to be on ({low, medium} or {high}), because it minimizes the Gini index with a value of 0.443.

The attribute age, when split over the subset ({youth, senior}), gives the minimum Gini index overall, with a reduction in impurity of 0.459 - 0.357 = 0.102. According to the Gini index, the binary split "age ∈ {youth, senior}" therefore becomes the splitting criterion, as it results in the maximum reduction in the impurity of the tuples in D.

Thus, the database of the electronics store shows that the attribute age has the maximum (highest) information gain and also the minimum Gini index, resulting in the maximum reduction in impurity of the tuples in D. The decision tree for the given data, shown in Figure 3, is therefore formed by taking age as the splitting attribute.
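The quoted Gini values can be reproduced as follows (illustrative only; the (yes, no) class counts per salary value and per age group below are assumptions chosen to match the stated results, since Figure 1 is not reproduced here):

```python
def gini(yes, no):
    total = yes + no
    return 1.0 - (yes / total) ** 2 - (no / total) ** 2

def weighted_gini(groups):
    """groups: one (yes, no) count pair per branch of the binary split."""
    total = sum(y + n for y, n in groups)
    return sum((y + n) / total * gini(y, n) for y, n in groups)

def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

low, medium, high = (3, 1), (4, 2), (2, 2)   # assumed per-salary (yes, no) counts

print(round(weighted_gini([merge(low, medium), high]), 3))   # {low,medium} | {high}   -> 0.443
print(round(weighted_gini([merge(low, high), medium]), 3))   # {low,high} | {medium}   -> 0.458
print(round(weighted_gini([merge(medium, high), low]), 3))   # {medium,high} | {low}   -> 0.45
print(round(weighted_gini([(5, 5), (4, 0)]), 3))             # age {youth,senior} | {middle_aged} -> 0.357
```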
The attribute age has the highest information gain and thus becomes the splitting attribute at the root node of the decision tree. Branches are grown for each outcome of age, and the tuples are partitioned accordingly.

A decision tree for the concept buys_laptop, indicating whether a customer at an electronics store is likely to buy a laptop or not, is shown in Figure 3. Each internal (non-leaf) node of the decision tree represents a test on an attribute. Each leaf node of the decision tree represents a class (either buys_laptop = "yes" or buys_laptop = "no").

Figure 3. A decision tree for the concept buys_laptop in an electronic store.
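Figure 3 itself is not reproduced in this extract; only the root split on age is stated in the text. Purely to illustrate the idea that internal nodes are attribute tests and leaves are class labels, such a tree can be written as nested rules (the sub-tests on graduate and credit_rating below are assumptions added for illustration):

```python
def buys_laptop(customer):
    if customer["age"] == "middle_aged":
        return "yes"                                               # leaf: class label
    if customer["age"] == "youth":
        return "yes" if customer["graduate"] == "yes" else "no"    # assumed sub-test
    return "yes" if customer["credit_rating"] == "fair" else "no"  # assumed sub-test (senior)

print(buys_laptop({"age": "youth", "graduate": "yes", "credit_rating": "fair"}))  # -> yes
```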
5. APPLICATIONS OF DECISION TREES IN VARIOUS AREAS OF DATA MINING
The various decision tree algorithms find wide application in real life. Some areas of application include:
- E-commerce: Widely used in the field of e-commerce, decision trees help to generate online catalogs, which are a very important factor for the success of an e-commerce website.
- Industry: The decision tree algorithm is very useful for producing quality-control (fault identification) systems.
- Intelligent vehicles: An important task in the development of intelligent vehicles is to find the lane boundaries of the road. Gonzalez and Ozguner have proposed lane detection for intelligent vehicles using decision trees.
- Medicine: The decision tree is an important technique for medical research and practice. Decision trees are used for the diagnosis of various diseases and also for heart sound diagnosis.
- Business: Decision trees also find use in the field of business, where they are used for the visualization of probabilistic business models, in CRM (Customer Relationship Management), for credit scoring of credit card users and for predicting loan risks in banks.

6. CONCLUSION
This paper analyses various decision tree algorithms that are used in data mining. We found that each algorithm has its own advantages and disadvantages as per our study. The efficiency of the various decision tree algorithms can be analyzed based on their accuracy and on the attribute selection measure used. The efficiency of the algorithms also depends on the time taken to form the decision tree. We found that both C4.5 and CART are better than ID3 when missing values are to be handled, whereas ID3 cannot handle missing or noisy data; on the other hand, ID3 produces results faster. The paper also gives an idea of the attribute selection measures used by the various decision tree algorithms: ID3 uses information gain, C4.5 uses the gain ratio and CART uses the Gini index. The paper also gives the methods for calculating these attribute selection measures. In all, we find that these decision tree induction algorithms are to be used at different times, according to the situation.
7. REFERENCES
[1] Anuj Rathee and Robin Prakash Mathur, "Survey on Decision Tree Classification algorithms for the evaluation of Student Performance", (IJCT), ISSN: 2277-3061, March-April 2013.
[2] Badr HSSINA, Abdelkarim MERBOUHA, Hanane EZZIKOURI and Mohammed ERRITALI, "A comparative study of decision tree ID3 and C4.5", (IJACSA).
[3] Devinder Kaur, Rajiv Bedi and Dr. Sunil Kumar Gupta, "Implementation of Enhanced Decision Tree Algorithm on Traffic Accident Analysis", (IJSRT), ISSN: 2379-3686, 15 September 2015.
[4] G. Kesavaraj and Dr. S. Sukumaran, "A Study On Classification Techniques in Data Mining", IEEE-31661, July 4-6, 2013.
[5] Han J., Kamber M., and Pei J. (2012) Data Mining: Concepts and Techniques, 3rd edition. The Morgan Kaufmann Series in Data Management Systems, Jim Gray, Series Editor.
[6] Hemlata Chahal, "ID3 Modification and Implementation in Data Mining", International Journal of Computer Applications (0975-8887), Volume 80, No. 7, October 2013.
[7] Jatinder Kaur and Jasmeet Singh Gurm, "Optimizing the Accuracy of CART algorithm by Using Genetic Algorithm", (IJST), Volume 3, Issue 4, Jul-Aug 2015.
[8] M.S. Mythili and Dr. A.R. Mohamed Shanavas, "An Analysis of students' performance using classification algorithms", (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Jan. 2014.
[9] Qing-Yun Dai, Chun-ping Zhang and Hao Wu, "Research of Decision Tree Classification Algorithm in Data Mining", International Journal of Database Theory and Application, Vol. 9, No. 5 (2016), pp. 1-8.
[10] T. Miranda Lakshmi, A. Martin, R. Mumtaj Begum and Dr. V. Prasanna Venkatesnan, "An Analysis on Performance of Decision Tree Algorithms using Student's Qualitative Data", I.J. Modern Education and Computer Science, June 2013.
[11] (2017, March 4), C4.5 [Online]. Available: http://en.wikipedia.org/wiki/C4.5_algorithm
[12] (2017, March 4), ID3 [Online]. Available: http://en.wikipedia.org/wiki/ID3_algorithm
[13] (2017, March 4), Random Forest [Online]. Available: http://en.wikipedia.org/wiki/Random_Forest