
The C4.5 (J48)
The C4.5 (J48) algorithm can generate both a decision tree and the corresponding ruleset, and it builds the tree in a way that enhances prediction accuracy. Models derived from C4.5 (J48) are easy to understand because the rules extracted by the technique have an explicit, uncomplicated interpretation, and the method requires neither field learning nor parameter setting. Using this algorithm, the researcher can readily identify the variables that are most useful for predicting the target. J48 is the WEKA toolkit's implementation of the C4.5 revision 8 technique, and it is the version that will be used in this study. The most common conventional decision tree algorithm is ID3, but it has several limitations: attributes must take nominal values, records with missing data cannot be included in the dataset, and the algorithm tends to overfit. C4.5 addresses these issues and can be used to build more generalized models: it selects splits using the information gain ratio (information gain normalized by the split information) rather than raw information gain, it handles continuous attributes by discretizing them into intervals, and it can cope with missing values.
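To make the use of J48 described above concrete, the following is a minimal sketch of how the algorithm can be trained and inspected through the WEKA toolkit's Java API. The file name dataset.arff, the class name J48Sketch, and the assumption that the class attribute is the last column are illustrative only and not taken from this study.

```java
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class J48Sketch {
    public static void main(String[] args) throws Exception {
        // Load a dataset in ARFF format (file name is illustrative only).
        DataSource source = new DataSource("dataset.arff");
        Instances data = source.getDataSet();

        // WEKA requires the class (target) attribute to be set explicitly;
        // here we assume it is the last attribute in the file.
        if (data.classIndex() == -1) {
            data.setClassIndex(data.numAttributes() - 1);
        }

        // Build a C4.5 (revision 8) tree. -C is the pruning confidence
        // factor and -M the minimum number of instances per leaf
        // (these are J48's default values, shown here for clarity).
        J48 tree = new J48();
        tree.setOptions(new String[] { "-C", "0.25", "-M", "2" });
        tree.buildClassifier(data);

        // Print the induced decision tree in human-readable form,
        // from which if-then rules can be read off directly.
        System.out.println(tree);
    }
}
```

Printing the trained model yields the tree structure as nested attribute tests, which is the easily interpretable ruleset referred to in the paragraph above.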
