
Chapter 9

Conclusions

In the real world, data grow exponentially, and it is practically impossible to benefit from these data without mining useful rules or interesting patterns from them. Data mining, in turn, draws on many techniques, and no single technique is suitable for all kinds of data and all types of domains (the no-free-lunch theorem holds). Hybrid techniques have therefore been designed to improve performance compared with their pure counterparts. The main objective of this thesis is to design and investigate a hybridized approach based on rough set theory and classical decision tree induction. In this chapter, we summarize the experimental results of the work and present the conclusions, with directions for future research.

9.1 Summary
The thesis addresses an important task of data mining, namely classification using decision tree induction. The RDT framework, a hybridization of rough set theory and decision tree induction, is proposed to address various data mining issues for classification. The performance of RDT has been investigated in handling large numbers of correlated categorical attributes, missing values, inconsistency, continuous attributes, large datasets, and noisy domains. In the first phase, benchmark datasets from the UCI repository were used for the experiments. However, since benchmark datasets have been overused by the research community, and the corresponding algorithms risk being fine-tuned to perform well on them, in the second phase the RDT model was also employed to mine two real-world datasets from the agricultural domain. Machine learning based data mining applications for the agricultural domain have not been in focus; hence useful applications need to be identified and initiated for this domain.
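The central flow of the RDT framework — remove dispensable attributes with a rough set reduct before inducing the tree — can be illustrated with a minimal sketch. This is not the thesis implementation: the decision table is a toy, and the reduct is computed greedily rather than exactly; the reduced attribute set would then be handed to ID3 or C4.5 for tree induction.

```python
def is_consistent(rows, attrs):
    """A decision table is consistent w.r.t. `attrs` if objects that agree
    on every attribute in `attrs` also agree on the decision."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row["decision"]) != row["decision"]:
            return False
    return True

def greedy_reduct(rows, attrs):
    """Drop attributes one at a time while consistency is preserved; what
    remains is a (not necessarily minimal) reduct of the decision table."""
    reduct = list(attrs)
    for a in attrs:
        trial = [x for x in reduct if x != a]
        if trial and is_consistent(rows, trial):
            reduct = trial
    return reduct

# Toy decision table: 'outlook' and 'wind' determine the decision,
# so 'humidity' is dispensable and is removed before tree induction.
rows = [
    {"outlook": "sunny", "wind": "weak",   "humidity": "high",   "decision": "no"},
    {"outlook": "sunny", "wind": "strong", "humidity": "high",   "decision": "yes"},
    {"outlook": "rain",  "wind": "weak",   "humidity": "high",   "decision": "yes"},
    {"outlook": "rain",  "wind": "strong", "humidity": "normal", "decision": "no"},
]
reduct = greedy_reduct(rows, ["outlook", "wind", "humidity"])
print(reduct)  # ['outlook', 'wind']
```

The greedy pass keeps an attribute only when deleting it would merge two objects with different decisions, which is exactly the discernibility criterion a reduct must preserve.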

After an extensive survey of rough set theory, the global decision-relative reduct was identified as one important constituent of the hybridized approach. From the review of decision tree induction, the ID3 and C4.5 algorithms were adjudged suitable as the base for hybridization. Using these two constituents, the architecture of the RDT framework was formulated, followed by the implementation of the corresponding RDT algorithm. At the outset, for a feasibility study of RDT, experiments were conducted on small datasets. The performance parameters accuracy and complexity were considered for comparing the classifiers obtained for the small datasets. Additional performance parameters, namely the number of rules and the number of attributes required in the resulting classifier, were also given due weight for the moderate and large datasets. The selection of datasets for further study was based on the issues identified to be addressed. The Mushroom dataset, with a large number of instances, includes 23 conditional attributes, all with nominal values and some missing values; a speculation regarding correlation among the attributes of this large-attribute dataset also prevailed. Further, the Iris, Vehicle, Australian-credit-card, Adult, and Cover type datasets were selected to deal with the problems of continuous attributes and missing values using the rough set based hybridized framework. The Nutrition dataset is a real-world dataset; the assumption is that it includes the kinds of noise, inconsistency, and other imperfections that usually occur in the real world. To address issues in such datasets, dynamic RDT was employed. Finally, RDT was tested for prediction of epidemic outbreak in mango using another real dataset covering a period of eleven years. This problem had earlier been studied by statisticians using logistic regression, and permission was obtained to use the results of those experiments to compare the performance of RDT.
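Handling continuous attributes, as for the Iris and Vehicle datasets above, requires discretization. One rough set flavoured approach — a simplified sketch, not the thesis procedure — generates candidate cuts at class boundaries and greedily keeps cuts until every pair of objects with different decisions is separated, in the spirit of Boolean-reasoning (MD-heuristic) discretization:

```python
def candidate_cuts(values, labels):
    """Midpoints between consecutive distinct values whose neighbouring
    objects carry different decisions."""
    pairs = sorted(zip(values, labels))
    cuts = []
    for (v1, l1), (v2, l2) in zip(pairs, pairs[1:]):
        if v1 != v2 and l1 != l2:
            cuts.append((v1 + v2) / 2)
    return cuts

def greedy_cuts(values, labels):
    """Greedily keep the cut that separates the most still-undiscerned
    pairs of differently labelled objects, until all pairs are separated."""
    to_discern = [(i, j) for i in range(len(values))
                  for j in range(i + 1, len(values))
                  if labels[i] != labels[j] and values[i] != values[j]]
    chosen, remaining = [], candidate_cuts(values, labels)
    while to_discern:
        best = max(remaining,
                   key=lambda c: sum((values[i] < c) != (values[j] < c)
                                     for i, j in to_discern))
        chosen.append(best)
        remaining.remove(best)
        to_discern = [(i, j) for i, j in to_discern
                      if (values[i] < best) == (values[j] < best)]
    return sorted(chosen)

values = [1, 2, 4, 5, 8, 9]
labels = ["a", "a", "b", "b", "a", "a"]
print(greedy_cuts(values, labels))  # [3.0, 6.5]
```

Two cuts suffice here because the attribute changes class twice along its range; each chosen cut lands exactly where the decision changes.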

No algorithm dominates all other algorithms for all types of domains; for every problem, there is no perfect algorithm that guesses the target function with no error. The best that can be hoped for is to understand the strengths and limitations of different algorithms. In practice one is interested in the best algorithm for the data being studied, not in average performance across many domains. Therefore, based on background knowledge of a given domain, a suitable algorithm should be recommended. The approach adopted here is to execute various models for classification and estimate the accuracy, complexity, size of the rule set, and number of attributes required for each of them. In addition, the cumulative score may be utilized for comparisons among algorithms and for the recommendation of a suitable one.
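The cumulative-score comparison described above can be sketched as a weighted combination of the four performance parameters. The exact CS formulation belongs to an earlier chapter; the version below is an illustrative assumption in which accuracy is rewarded, the other (normalized) parameters are penalized, and the weights encode the user's domain preferences. The metric values shown are hypothetical.

```python
def cumulative_score(metrics, weights):
    """Illustrative Cumulative Score: reward accuracy, penalize complexity,
    rule count, and attribute count (all assumed normalized to [0, 1])."""
    return (weights["accuracy"] * metrics["accuracy"]
            - weights["complexity"] * metrics["complexity"]
            - weights["rules"] * metrics["rules"]
            - weights["attributes"] * metrics["attributes"])

# Hypothetical, normalized results for two classifiers on one dataset.
candidates = {
    "RDT":  {"accuracy": 0.94, "complexity": 0.30, "rules": 0.25, "attributes": 0.20},
    "C4.5": {"accuracy": 0.95, "complexity": 0.55, "rules": 0.60, "attributes": 0.70},
}
weights = {"accuracy": 1.0, "complexity": 0.3, "rules": 0.2, "attributes": 0.2}

ranked = sorted(candidates,
                key=lambda k: cumulative_score(candidates[k], weights),
                reverse=True)
print(ranked[0])  # the recommended algorithm under these weights
```

Note that C4.5 wins on raw accuracy, yet under these weights RDT ranks first because its classifier is smaller and uses fewer attributes — precisely the trade-off CS is meant to surface.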

No system is foolproof, and the performance of RDT was not always the best in terms of accuracy. However, the experimental results on the representative datasets from a domain cannot disprove the validity of the RDT model, because the difference between the performance of RDT and that of the standard decision tree induction algorithm was insignificant. The implementation of the RDT algorithm is automated with the help of interface programs, script files, and batch files, which are not as efficient as a fully automated module for the RDT model would have been. Nevertheless, it helped in repeating experiments with changed parameters, as well as in obtaining consistent results in future for a selected set of parameters. For evaluating the performance of algorithms, computation of the Cumulative Score (CS) requires assigning a weight to each of the performance parameters; hence some domain knowledge and preferences are required on the part of the user to use CS for performance comparison. The results of classification using the RDT model and its variants are compared with the results obtained from the rough set approach, the decision tree induction algorithms by Quinlan, and the logistic regression method (for predictive modelling); models based on neural networks or genetic algorithms are not attempted. For predictive modelling, determining the optimum size of the dataset and the conditions for updating the prior predictive model was not possible at this stage, owing to the non-availability of more instances of the data. As evident from the removal of redundant attributes at a prior stage, the RDT model requires less memory during decision tree induction than classical decision tree induction; however, a detailed analysis of the memory requirement was outside the scope of the dissertation.

Based on the experiments, we present the following contributions and

conclusions:

1. A new hybridized model for classification, namely RDT, based on rough set theory and decision tree induction, is proposed. For a decision system with n objects characterized by m attributes, the computational complexity of RDT is O(m²n log n) + O(n(log n)²). Moreover, RDT removes irrelevant attributes at a stage prior to decision tree induction, and thus requires less memory in the subsequent steps of executing the model and of classifying the test data as well as the actual examples.

2. The accuracy, the complexity, the number of rules, and the number of attributes in the resulting classifier are identified as the performance parameters for comparing algorithms. It may happen that no single algorithm dominates with respect to all the performance parameters. To deal with such a case, the Cumulative Score (CS) is formulated, mainly for comparing and ranking classification algorithms. By assigning a weight to each of the performance parameters, CS facilitates the identification of a suitable algorithm.

3. For datasets having a large number of correlated attributes, a large number of reducts might exist. In order to fine-tune RDT and extend its applicability to such datasets, the concept of the approximate core is proposed, along with an algorithm for its computation. All possible reducts, denoted by R, are obtained using efficient GA heuristics; let A be the attribute set corresponding to R. The approximate core is employed to yield a simplified classifier. The computational complexity of the approximate core is O(|R|) + O(|A| log |A|).

4. Rough set based discretization offers additional utility of rough set theory within the purview of RDT to address the issue of continuous attributes. It compares well with C4.5, the popular standard algorithm for handling continuous attributes. The experimental results obtained using RDT and its variants exhibited the potential to trade off complexity against accuracy as per the requirements.

5. The dynamic RDT model is based on the dynamic reduct and is observed to be suitable for handling real-world datasets containing noise. An algorithm for computing the dynamic reduct is also introduced. Based on the sampling strategy of cross-validation, the proposed dynamic RDT opens a promising avenue for mining large datasets and for incremental learning.

6. For the real-world prediction problem, the RDT algorithms are observed to perform better than logistic regression and the base algorithms of RDT. The resulting classifier obtained from the hybrid framework is simple and easy to interpret, compared with the statistical coefficients used in the statistical methods of prediction.
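The approximate core of contribution 3 admits a compact illustration. Assuming a threshold reading — attributes present in at least a fraction α of the reducts, with α = 1 recovering the exact core (the intersection of all reducts) — a sketch is given below; the thesis's precise definition may differ, and the reduct family shown is hypothetical.

```python
from collections import Counter

def approximate_core(reducts, alpha=1.0):
    """Attributes occurring in at least a fraction `alpha` of the reducts.
    alpha = 1.0 yields the exact core; the threshold reading of
    'approximate' is an illustrative assumption.
    Cost: one pass over R to count, then a scan of the attribute set A."""
    counts = Counter(a for r in reducts for a in set(r))
    threshold = alpha * len(reducts)
    return {a for a, c in counts.items() if c >= threshold}

# Hypothetical reduct family R over attribute set A = {a, b, c, d}.
reducts = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}, {"a", "c"}, {"a", "b"}]
print(approximate_core(reducts, alpha=1.0))  # exact core: only 'a' is in every reduct
print(approximate_core(reducts, alpha=0.6))  # relaxed core also admits 'b'
```

Relaxing α matters precisely when many correlated attributes produce many reducts: the exact core may then be empty, while the approximate core still isolates the attributes that matter most often.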

Our experiments in the dissertation suggest that RDT, or one of its variants, i.e. RDTcoreu, DJU, DJP, RJU, RJP, DRJU or DRJP, outperforms the base algorithms. The results indicate the suitability of the RDT model for yielding a classifier with specific features. Finally, the dissertation attempts to contribute towards exploring the potential of rough set theory by hybridizing it with conventional induction methods for optimal performance.
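The dynamic reduct underlying the dynamic RDT variants can be sketched in simplified form as reducts that recur across many random subtables of the data. The sampling scheme, thresholds, and greedy reduct routine below are illustrative assumptions, not the thesis algorithm.

```python
import random
from collections import Counter

def is_consistent(rows, attrs):
    # objects agreeing on `attrs` must agree on the decision
    seen = {}
    for r in rows:
        key = tuple(r[a] for a in attrs)
        if seen.setdefault(key, r["decision"]) != r["decision"]:
            return False
    return True

def greedy_reduct(rows, attrs):
    # drop attributes whose removal preserves consistency
    reduct = list(attrs)
    for a in attrs:
        trial = [x for x in reduct if x != a]
        if trial and is_consistent(rows, trial):
            reduct = trial
    return tuple(reduct)

def dynamic_reducts(rows, attrs, n_samples=50, sample_frac=0.7, stability=0.5):
    """Reducts that recur in at least a `stability` share of the consistent
    random subtables -- a simplified reading of dynamic reducts."""
    rng = random.Random(0)                # fixed seed: reproducible sketch
    k = max(1, int(sample_frac * len(rows)))
    counts = Counter()
    for _ in range(n_samples):
        sample = rng.sample(rows, k)
        if is_consistent(sample, attrs):  # skip inconsistent subtables
            counts[greedy_reduct(sample, attrs)] += 1
    total = sum(counts.values())
    return {r for r, c in counts.items() if total and c / total >= stability}

rows = [
    {"outlook": "sunny", "wind": "weak",   "humidity": "high",   "decision": "no"},
    {"outlook": "sunny", "wind": "strong", "humidity": "high",   "decision": "yes"},
    {"outlook": "rain",  "wind": "weak",   "humidity": "high",   "decision": "yes"},
    {"outlook": "rain",  "wind": "strong", "humidity": "normal", "decision": "no"},
]
print(dynamic_reducts(rows, ["outlook", "wind", "humidity"]))
```

Because a reduct must survive many perturbed subtables to qualify, attributes whose relevance is an artefact of noise tend to be filtered out — the property that makes dynamic RDT suited to noisy domains.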

9.2 Future Research


Research never ends. More experiments need to be carried out on large datasets from real-world domains. The experimental results obtained from real-world agricultural data were appealing. In view of the positive indicators from predictive modelling using RDT, more applications for agricultural domains need to be identified and addressed. A detailed comparative study of prevalent statistical methods for prediction against RDT, its variants, and other machine learning algorithms may further increase the credibility of the proposed RDT model.

242
References
[AIS93] Agrawal, R., Imielinski, T. and Swami, A., Mining Association
Rules between Sets of Items in Large Databases, In Proceedings of
the ACM SIGMOD International Conference on Management of
Data, 1993
[AP96] Ali, K. and Pazzani, M., Error reduction through learning multiple
descriptions. Machine Learning , 24(1 ): 173-202, 1996

[AR03] Adhiguru, P., Ramasamy, C., Agricultural-based Interventions for


Sustainable Nutritional Security, Policy Paper 17, NCAP, New
Delhi, India, 2003

[ARS98] Alsabti, K., Ranka, S., Singh, V., CLOUDS: A Decision Tree
Classifier for large datasets, In Proceedings of the International
Conference on Knowledge Discovery and Data Mining, New York,
August, 1998

[AS94] Agrawal, R., Srikant, Fast Algorithms for ·Mining Association


Rules, In Proc. of the 20th Int'l Conference on Very Large
Databases, Santiago, Chile, Sept. 1994
[AS96] Aasheim, 0. T., Solheim, H. G., Rough Sets as a Frame work for
Data Mining, Project Report, The Norwegian University of Science
and Technology, 1996

[Att94] Attar, A. A., White Paper: A Hybrid GA-Heuristic Search Strategy,


AI Expert USA, September 1994

[Baz96] Bazan, J. G, Dynamic reducts and statistical inference, In


Proceedings of the Sixth International Conference, Information
Procesing and Management of Uncertainty in Knowledge-Based
Systems (IPMIU'96), 3, 1996

[Baz98] Bazan, J. G., A Comparison of Dynamic and Non-Dynamic Rough


Set Methods for Extracting Laws from Decision Tables, In
Polkowski, L. and Skowron, A. (Eds.), Rough Sets in Knowledge
Discovery 1, Methodology and Applications, Chapter 17, Physica-
Verlag, 321-365, 1998

[BBEZOO] Bengio, B., Buhmann J. M., Embrechts M. J. and Zurada J. M.,


Introduction to the Special Issue on Neural Network for Data
Mining and Knowledge Discovery, IEEE Transactions on Neural
Network 11(3), 2000

[BC03] Bagnall, A. J., Cawley, G. C., Learning classifier Systems for Data
Mining: A Comparison of XCS with Other Classifiers for the Forest
Cover Data Set, In Proceedings of the IEEE/INNS International

243
Joint Conference on Artificial Neural Networks (IJCNN-2003),
Portland, Oregon, USA, 3:1802-1807, July 2003

[BD99] Blackard, J. A., Dean, D. J., Comparative Accuracies of Artificial


Neural Networks and Discriminant Analysis in Predicting Forest
Cover Types from Cartographic Variables, Computer and
Electronics in Agriculture, 24:131-151, 1999

[BDP94] Bala, J. W., DeJong, K., Pachowicz, P. W., Multistrategy Learning


from Engineering Data by Integrating Data, Michalski, R. and
Tecuci, G. (Eds.) Machine Learning: A Multistrategy Approach
Chapter 18, Vol IV, San Francisco: Morgan Kaufmann, 471-487,
1994

[BFOS84] Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J.,


Classification and Regression Trees, Wadsworth 1984

[BK97] Bjorvand, A. T., Komorowski, J., Practical Applications of Genetic


Algorithms for Efficient Reduct Computation., Wissenschaft &
Technik Verlag, 4: 601-606, 1997

[BLOO] Berry, M. J. A. and Linoff G. S., Mastering Data Mining, John


Wiley & Sons, Inc., 2000

[BMP98] Banerjee M., Mitra S. and Pal S. K., Rough Fuzzy MLP:
Knowledge Encoding and Classification, IEEE Transactions on
Neural Networks, 9:1203-1216, 1998

[BN89] Buntine, W., Niblett, T., A Further Comparison of Splitting Rules


for Decision Tree Induction, Machine Learning, 3:7 5-85, 1989

[Bon60] Boneau, C. A., The effects of violations of assumptions underlying


the t test, Psychological Bulletin, 57:49-64, 1960

[Bre96] Breiman, L., Bagging predictors, Machine Learning, 24(2):123-140,


1996

[BSS94] Bazan, J. G., Skowron, A., Synak, P., Dynamic Reducts as a Tool
for Extracting Laws from Decision Tables, In Proc. International
Symposium on Methodologies fro Intelligent Systems, LNCS
Springer-Verlag, 869:346-3 55, 1994

[CAC99] Cercone, N., An, A., Chan C., Rule Induction and Case based
Reasoning: Hybrid Architectures Appear Advantageous, IEEE
TKDE, 11(1):166-174, 1999
'
[Cat91] Catlett, J., On Changing Continuous Attributes into Ordered
Discrete Attributes, In Y. Kodrtoff (Eds.) EWSL-91, Lecture Notes
in AI Springer Verlag, Berlin, Germany, 482:164-178, 1991

[CBB02] Collobert, R., Bengio, S., Bengio, Y., A Parallel Mixture of SVMs
for Very Large Scale Problems, Neural Computation 14:1105-1114,
2002

244
[CCW90] Chiu, D. K. Y., Cheung, B., Wong, A. K. C., Information Synthesis
Based on Hierarchical Entropy Discretization, Journal of
Experimental and Theoretical Artificial Intelligence 2: 117-129,
1990

[CG94] Chmielewski, M. R., Grzymala-Busse, J. W., Global Discretization


of Continuous Attributes as Preprocessing for Machine Learning,
International Journal of Approximate Reasoning 11, 1994

[CN86] Clark, P., Niblett, T., Induction in noisy domains, Expert Systems,
UK, 1986

[CS93] Chan, P. K., Stolfo, S. J., Experiments on Multistrategy Learning


by Meta-Learning, In Procd of 2nd Inti. Conf. on Information and
Knowledge Management, 314-323, 1993

[DB95] Dietterich, T. G., Bakiri, G., Solving Multi-Class Learning


Problems by Error Correcting Output Codes, Journal of Artificial
Intelligence Research, 2:263-286, 1995

[DJ80] Dubes, R. and Jain, A. K., Clustering Methodologies in Exploratory


Data Analysis, Advances in Computers 19: 113-228, 1980

[DK01] Dong, M., Kothari, R., Look-Ahead Based Fuzzy Decision Tree
Induction, IEEE Transactions on Fuzzy Systems 9(3):461-468,
2001

[DK82] Dietrich, J. R., Kaplan, R. S., Empirical Analysis of the


Commercial Loan Classification Decision, Accounting Review
58(1):18-31, 1982

[DKS95] Dougherty, J., Kohavi, R., Sahami, M., Supervised and


Unsupervised Discretization of Continuous Features, Machine
Learning: Proceedings of Twelfth International Conference,
Morgan Kaufmann, Los Altos, CA, 1995
[DRS97] Deogun , J.S., Raghvan, V.V., Sever, H., D?ta Mining: Research
Trends, challenges and Applications, InT. Y. Lin and N. Cercone,
Eds., Roughs Sets and Data Mining: Analysis of Imprecise Data, 9-
45, Boston, MA, Kluwer Academic Publishers, 1997
[ET93] Efron, B., Tibshirani, R., An Introduction to Bootstrap, Chapman'
and Hall, London, 1993
[Fay96] Fayyad, U., Data Mining and Knowledge Discovery: Making Sense
out of Data, IEEE Expert, Oct. 20-25, 1996

[FI92] Fayyad, U. M., Irani, K. B., On The Handling of Continuous -


Valued Attributes in Decision Tree Generation, Machine Learning
.8:77-102, 1992
[FI93] Fayyad, U. M., Irani, K. B., Multi-Interval Discretization of
Continuous-Valued Attributes for Classification Learning, In Proc.
of the 13th International Joint Conference on Artificial Intelligence,

245
Morgan Kaufmann I 022-1027, 1993

[FSS96a] Fayyad, U. Shapiro, G. P. and Smyth P., The KDD Process for
Extracting Useful Knowledge from Volumes of Data,
Communications ofthe ACM 39(11):27-34, 1996
[FSS96b] Fayyad U., Shapiro, G. P. and Smyth P., From Data Mining to
Knowledge Discovery in Databases, AI magazine 37-54, 1996
[FTS02] Francis, E., Tay, H., Shen, L., A Modified Chi2 Algorithm for
Discretization, IEEE Transaction on Knowledge and Data
Engineering, 14(3):666-670, 2002
[FU96] Fayyad, U. and Uthurusamy, R., Data Mining and Knowledge
Discovery in Databases, Communications of the ACM, 39(11):24-
26, 1996
[Gam99a] Gama, J., ·Combining Classification Algorithms. Ph.D. Thesis,
University of Porto, 1999
[Gam99b] Gama, J., Discriminant Trees, In Bratko I. and Dzeroski S. (Eds.),
Procd of the 16th ICML '99. Morgan Kaufmann 134-142, 1999
[GB99] Gama J. and Brazdil, P., Linear Tree, lntell~gent Data Analysis
3(1): 1-22, 1999
[GGRL99] Gehrke, J. E., Ganti, V., Ramkrishnan R., Loh, W. Y., BOAT-
Optimistic Decision Tree Construction, In Proceedings of
SIGMOD, 1999
[Gol89] Goldberg, D.E. Genetic Algorithms in Search Optimization and
Machine Learning, Addison-Wesley, 1989

[GRG98] Gehrke, J. E., Ramkrishnan R., Ganti, V., RAINFOREST:A


Framework for Fast Decision Tree Construction for Large Data
Sets, In proceedings of the 24th International Conference on Very
Large Databases, New York 1998

[Grz02] Grzymala-Busse, J. W., C3.4-Discretization of Numerical


Attributes, In Handbook of DM and Knowledge Discovery Oxford,
218-225,2002 .

[GS01] Grzymala-Busse, J. W., Stefanowski J., Three Discretization


Methods for Rule Induction, International Journal of Intelligent
Systems, 16(1):29-38, 2001

[GS96] George, R., Srikanth, R., Data Summarization using Genetic


Algorithms and Fuzzy Logic, In Herrera, F. and Verdegay, J. L.
(Eds.) Genetic Algorithms and Soft Computing Heidelberg,
Germany: Physica Verlag, 599-611, 1996
[GTS98] Gama, J., Torgo, L., Soares, C., Dynamic Discretization of
Continuous Attributes, In Procd of the 6th lbero-American
Conference on AI: Progress in Artificial Intelligence, LNCS 160-

246
169, 1998

[GZ03] Grzymala-Busse, J.W., Ziarko, W., Data Mining Based on Rough


Sets, in Data Mining: Opportunities and Challenges, ed. by John
Wang, Idea Group Publ., 142-I73, 2003

[Hal02] Hall, Mark A., Holmes, G., Benchmarking Attribute Selection


Techniques· for Discrete Class Data Mining. IEEE TKDE 20: 1-I6,
2002

[Han81] Hand, D. J., Discrimination and Classification New York John


. /

Wiley, I981

[HC96] Hu, X. and Cercone, N., Mining Knowledge Rules from Databases:
a Rough Set Approach, In Procd. 1ih Intlernationa. Conference
Data Engg., Washingaton 96-105, 1996

[HH61] Hunt, E. B., Hovland, C. 1., Programming a Model of Human


Concept Formation, Proceedings of the Western Joint Computer
Conference I45-155, 1961

[HK01] Han, J., Kamber, M. Data Mining Concepts and Techniques,


Morgan Kaufmann Publisher, 2001

[HKKOO] Han, E., Karypis G. and Kumar V., Scalable Parallel Data Mining
for Association rules, IEEE ITKDE 12(3):337-352, 2000

[HMS01] Hand, D., Mannila, H., Smyth, P., Principles of Data Mining,
Prentice Hall of India, 200 I

[HMS66] Hunt, E. B., Marin, J., Stone, P. J., Experiments in Induction, San
Diego CA: Academic Press, 1966

[Hol93] Holte, R.C., Very Simple Classification Rules Perform Well on


Most Commonly Used Data Sets, Machine Learning I1 :63-90,
I993
[HS96] Hoa, N. S., Son, N. H., Some Efficient Algorithms for Rough Set
Methods, In Proceedings" of the sixth international Conference,
Information Processing Management of Uncertainty in Knowledge-
Based Systems (IPMU-96), July 1-5, Spain, 1996
[HTF01] Hastie, T and Tibshirani, R, and Friedman, The Elements of
Statistical Learning: Data Mining, Inference and Prediction,
Springer, 200 I
[IC03] Ismail, M. K., Ciesielski, V., An Empirical Investigation of the
Impact of Discretization on Common Data Distributions, In Procd
HIS 2003: Design and Applications of Hybrid Intelligent Systems,
lOS Press, 692-701,2003
[IM93] Imam, I. F. and Michalski, R. S., Should Decision Trees Be
Learned from Examples or from Decision Rules?, In Proceedings of
the 7th International Symposium on Methodologies for Intelligent

247
Systems, ISMIS, Lecture Notes in Artificial Intelligence, Springer
Verlag Trondheim, Norway, June 15-18, 1993

[JM03a] Jain R., Minz S., Should Decision Trees Be Learned Using Rough
Sets ? In Procd. F 1 Indian International Conference on Arti.fical
Intelligence (IICAI-03), Hyderabad, India, 1466-1479, 2003

[JM03b] Jain, R., Minz, S., Classifying Mushrooms in the Hybridized Rough
Sets Framework, In Proceed. F 1 ·Indian International Conference
on Arti.fical Intelligence (IICAI-03), 554-567, 2003

[Joh74] Johnson, D. S., Approximation Algorithms for combinatorial


problems, Journal of Computer and System Sciences, 9:256-278,
1974

[JW02] Johnson, R. A., Wichern, D. W., Applied Multivariate Statistical


Analysis, Pearson Education Asia, 2002

[Kas75] Kass, G. V., Significance Testing in Automatic Interaction


Detection (A.I.D.), Applied Statistics, 24:178-189, 1975
[Kas80] Kass, G. V., An Exploratory Technique for Investigating Large
Quantities of Categorical Data, Applied Statistics, 29:1:19-127,
1980

[KB84] Kononenko, I., Bratko, I., Roskar E., Experiments in Automatic


Learning of Medical Diagnostic Rules, Technical Report, Jozef
Stefan Institute, Ljubljana, Yugoslavia, 1984
[KK98] Kwedio, W., Kretowski, M., Learning Decision Rules Using an
Evolutionary Algorithm and Entropy-Based Discretization,. JJS VII
proceedings ofthe Workshop, 1998

[Koh95] Kohavi, R., A Study of Cross-Validation and Bootstrap for


Accuracy Estimation and Model Selection. In Proc. Fourteenth
International Joint Conference on Artificial Intelligence, Morgan
Kaufmann, 1137-1143,1995
[Koh96] Kohavi R., Scaling up the Accuracy of Nai."ve-Bayes Classifiers: A
Decision Tree Hybrid, In Simoudis E. & Han J. (Eds.), KDD-96:
Proceed. 2nd Inti. Conf. on Knowledge Discovery and Data Mining
202-207, 1996 \
[KP99] Kiem H., Phuc D., Using Rough, Genetic and Kohonen's Neural
Network for Conceptual Cluster Discovery in data mining," In
Proc. RSFDGrC 99, Yamaguchi, Japan, 448-452, 1999
[KPPS99] Komorowski, J., Pawlak, Z., Polkowki, L., Skowron, A., Rough
Sets: A Tutorial, In Pal S. K., Skowron, A.(Eds.), Rough Fuzzy
Hybridization Springer, 3-99, 1999
[Krz77] Krzanowski, W. J. The Performance of Fisher's Linear
Discriminant Function Under Non-Optimal Conditions.,

248
Technomettrics, 19(2): 191-200, 1977
[KS96] Kohavi, R., Sahami, M., Error based and Entropy-based
Discretization of Continuous Features, In Proceedings of the 2nd
International conference on Knowledge Discovery and Data
Mining, Menlo Park, AAAI Press, 114-119

[Lan93] Langley P., Induction of Recursive Bayesian Classifiers, In Brazdil


P.B.(Eds.), Machine Learning: ECML-93, Springer, 153-164, 1993

[LHM98] Liu, B., Hsu, W., Ma, Y., Integrating Classification and Association
Rule Mining, In Proceedings of the Fourth International
Conference on Knowledge Discovery and Data Mining (KDD-98),
New York, USA, 1998

[LHW04] Liu X., Hall, L. 0., Bowyer, K. W., Comments on a Parallel


Mixture of SVMs for Very Large Scale Problems, Neural
Computation 16 (7), 1345-1351, 2004
[Lin02] Lin, T. Y., Attribute Transformations for Data Mining 1:
Theoretical Explorations, International Journal of Intelligent
Systems 17: 213-222, 2002
[LP93] Lenarcik, A., Piasta, Z., Probabilistic Approach to Decision
Algorithm Generation in the Case of Continuous Condition
Attributes, Foundations of Computing and Decisions Sciences
18(3-4):213-223, 1993

[LP97] Lenarcik, A., Piasta, Z., Probabilistic Rough Classifiers with


Mixture of Discrete and Continuous Attributes, In Lin, T. Y.,
Cercone, N. (Eds.), Rough sets and Data mining: Analysis of
Imprecise D1ta, Kluwer Academic Publishers, Boston, 1997t:

[LSL95] Lu, H., Setiono, R., Liu, H., Neuro Rule: A Connectionist Approach
to Data Mining, In Procd. 21st VLDB Conference, 1995

[LT95] Liu, H., Tan, S. T., X2R: A Fast Rule Generator, In Procd. IEEE
Inti. Conference on systems, Man and Cybernetics 1995
[MA95] Murphy P.M., Aha D.W., UCI repository of machine learning
databases, www. cs. uci. edu!mlearn/MLRepository.html University of
California, Irvine, 1995
[MJ03a] Minz, S., Jain R., Rough Set based Decision Tree model for
Classification, In Procd. 51h International Conference on Data
Warehousing and Knowledge Discovery, DaWaK 2003 Prague,
Czech Republic, September 3-5,2003, LNCS 2737:172-181,2003

[MJ03b] Minz, S., Jain, R., Hybridizing Rough Set Framework for
Classification: An Experimental View, Design and Application of
Hybrid Intelligent Systems A. Abraham et. Al. (Eds.), lOS Press,
631-640, 2003
[MK97] Michalski, R. S., Kaufman K. A., Data Mining and Knowledge

249
Discovery: A Review of Issues and a Multistrategy Approach
Chapter 2, In Michalski, R. S., Bratko, I., Kubat, M. (Eds.),
Machine Learning and Data Mining: Methods and applications,
London, John Wiley & Sons, 1997

[MMHL86] Michalski R., Mozetic I., Hong J., Lavrac N., The AQ15 Inductive
Learning System: An Overview and Experiments, Proceeding of
!MAL, Orsay 1986
[Modr93] Modrzejewski, M., Feature Selection using Rough Sets Theory,
Procd. of European Conference on Machine Learning, Lecture
Notes in Artificial Intelligence, 667, Springer-Verlag, U.K 213-226,
1993

[MPM02] Mitra, S., Pal, S. K, Mitra, P., Data Mining in Soft Computing
Framework: A Survey, IEEE Transactions on Neural Networks,
13(1), 2002

[MPR04] Mishra, A. K., Prakash, 0., Ramasubramanian V., Forewarning


Powdery Mildew Caused by Oidium Mangiferae in Mango
(Mangifera Indica) using Logistic Regression Models, Indian
Journal of Agricultural Sciences, 74(2): 84-87, 2004
'
[MR03a] Mieszkowicz-Rolka, A., Rolka, L., Variable Precision Rough Sets
in Analysis of Inconsistent Decision Tables. In: Rutkowski, L.,
Kacprzyk, J. (Eds.): Advances in Soft Computing. Physica-Verlag,
Heidelberg, 304-309, 2003

[MR03b] Mieszkowicz-Rolka, A., Rolka, L., Variable Precision Rough Sets.


Evaluation of Human Operator's Decision Model. In: SoAldek, J.,
Drobiazgiewicz, L. (Eds.): Artificial Intelligence and Security in
Computing Systems, Kluwer, London, 33-40,2003

[MRA95] Mehta, M., Rissanen, J., Agrawal, R., MDL-based Decision Tree
Pruning. In International Conference on Knowledge Discovery in
Databases and Data Mining (KDD-95), Canada, August 1995

[MS96] Mollestad, T., Skowron A., A Rough Set Framework for Mining of
Propositional Default Rules, In LNCS I 079:448-457, 1996

[MSSB94] Murthy, S., Simon, K., Salzberg, S., Beigel, R., OC1: Randomized
Induction of Obliqe Decision Trees, Journal of Artificial
Intelligence Research, 1994

[MTV94] Mannila, H., Toivonen, H., Verkamo, A. J., Efficient Algorithms


for Discovering Association Rules, In AAAI workshop on KDD,
181-192, 1994

[Mur95] Murthy, S. K., On Growing Better Decision Trees froin Data, Ph.D.
Thesis, Department of Computer Science, Johns Hopkins
University, Baltimore, Maryland, 1995

[Mur98] Murthy, S. K.: Automatic Construction of decision trees from Data:

250
A Multidisciplinary Survey, Data Mining and Knowledge
Discovery 2:345-389, 1998

[Ngu97] Nguyen, H. S., Discretization of Real Value Attributes: A


Boolean Reasoning Approach, Ph.D. thesis, 1997

[Nib86] Niblett T., Constructing Decision Trees in Noisy Domains, Expert


Systems UK, 1986

[NN97] Nguyen H. S., Nguyen S. H., Discretization methods with


Backtracking, In Proceedings of the 5th European Congress on
Intelligent Techniques and Soft Computing EUFIT'97, Aachen,
Germany, 201-205, 1997

[NN98] Nguyen, H. S., Nguyen, S. H., Discretization Methods for Data


Mining, In L. Polkowski, A. Skowron (Eds.), Rough Sets in
Knowledge Discovery, Physica-Verlag, Heidelberg, 451-482, 1998

[Ohr99] Ohm, A., Discernibility and Rough Sets in Medicine: Tools and
Applications, Ph.D. thesis, Norwegian University of Science and
Technology, Department of Computer and Information Science,
NTNU report 1999:133, 1999

[OK96] 0hrn, A., Komorowski J., Rosetta-A Rough Set Toolkit For
Analysis of Data, Technical Report, Department of Computer
Systems, Norwegian University of Science and Technology, 1996

[Par98] Parsons, S., Current Approaches to Handling Imperfect Information


in Data and Knowledge Bases, IEEE TKDE 10(5): 862, 1998

[Paw01] Pawlak, Z., Drawing Conclusions from Data-The Rough Set Way,
International Journal of Intelligent Systems, 16:3-11, 2001

[Paw91] Pawlak, Z., Rough Sets-Theoretical Aspects of Reasoning about


Data, Kluwer Academic Publishers, Dordecht 1991

[PF97] Provost, F., Fawcett, T., Analysis and Visualization of Classifier


Performance: Comparison under Imprecise Class and Cost
Distributions, In Procd third International Conference on
Knowledge Discovery and Data Mining, AAAI Press, 1997
[Pfa95] Pfahringer, B., Compression Based Discretization Of Continuous
Attributes, In Proceedings of twelfth International Conference on
Machine Learning, Morgan Kaufmann, 1995
[Prit95] Pritchard P., A Simple Subquadratic Algorithm for Computing the
Subset Partial Order, Information Processing Letters, 56:337-341,
1995

[PS98] Polkowski, L., Skowron, A., Rough Sets in Knowledge Discovery 1


·and 2, Heidelberg, Germany: Physica-Verlag, 1998

[PS99] Pal, S. K., Skowron A. (Eds.), Rough Fuzzy Hybridization: A new

251
Trend in Decision Making, Singapore, Springer-Verlag, 1999

[PujOO] Pujari, A. K., Data Mining Techniques, Universities Press, 2000

[PWZ88] Pawlak, Z., Wong, S. K. M., Ziarko, W., Rough Sets: Probabilistic
Versus Deterministic Approach, International Journal of Man-
Machine Studies 29:81-95, 1988

[Qui86] Quinlan, J. R., Learning from Noisy Data, Machine Learning 2,


Michalski, R., Carbonell J., Mitchell, T. (Eds.), Palo Alto, CA:
Tioga, 1986

[Qui87] Quinlan, J. R., Simplifying Decision Trees, International Journal of


Man-Machine Studies, 27:221-234, 1987

[Qui93] Quinlan, J. R.: C4.5: Programs for Machine Learning, Morgan


Kauffman 1993

[Ris85] Rissanen, J., The Minimum Description Length Principle, In Kotz,


S., Johnson, N. L. (Eds.), Encyclopedia of Statistical Sciences, John
Wiley, New York, 5:523-527, 1985

[Ros] Rosetta, Rough set toolkit for analysis of data available at


http:/lwww. idi. ntnu. no/-aleks/rosetta/

[SA95] Srikant R., Agrawal R., Mining Generalized Association Rules, In


Proceedings of 21st VLDB Conference, 1995

[Salz97] Salzberg, S., L., On Comparing Classifiers: Pitfalls to Avoid and a


Recommended Approach, Data Mining and Knowledge Discovery,
Kluwer Academic Publisher 1(3):317-328, 1997
[SAM96] Shafer, J., Agrawal, R., Mehta, M., SPRINT: A Scalable Parallel
Classifier for Data Mining, In Proceedings of the 22nd International
Conference on Very Large Data Bases (VLDB), 1996

[Sch97] Schapire, R. E., Using Output Codes to Boost Multi-class Learning
Problems, In Proceedings of the 14th International Conference on
Machine Learning, 1997

[SH97] Son, N. H., Hoa, N. S., Some Efficient Algorithms for Rough Set
Methods, Warsaw University, 1997
[Sha48] Shannon, C., A Mathematical Theory of Communication, The Bell
System Technical Journal, 27:379-423, 623-656, 1948
[SHMT93] ,Sutherland, A., Henery, R., Molina, R. , Taylor, C. C. King, R.,
Statistical Methods in Learning, Bouchon-Meunier, B., Valverde,
L., Yager, R. R. (Eds.) LNCS Springer-Verlag, 682, 1993

[SJ76] Spatz, C., Johnston, J. O., Basic Statistics: Tales of Distributions,
Brooks/Cole Publishing Company, Monterey, California, 1976
[Sko95] Skowron, A., Extracting Laws from Decision Tables: A Rough Set
Approach, Computational Intelligence, 11:371-388, 1995
[SP97] Skowron, A., Polkowski, L., Synthesis of Decision Systems from
Data Tables, In Lin, T. Y., Cercone, N. (Eds.), Rough Sets and Data
Mining: Analysis of Imprecise Data, Kluwer, 259-300, 1997
[SPW00] Seewald, A. K., Petrak, J., Wilmer, G., Hybrid Decision Tree
Learners with Alternative Leaf Classifiers: An Empirical Study,
available at citeseer.ist.psu.edu/422774.html, 2000

[SR92] Skowron, A., Rauszer, C., The Discernibility Matrices and
Functions in Information Systems, In Slowinski, R. (Ed.),
Handbook of Applications and Advances of the Rough Set Theory,
Kluwer Academic Publishers, 1992

[SS95] Skowron, A., Son, N. H., Quantization of Real Value Attributes:
Rough Set and Boolean Reasoning Approach, In Proceedings of the
Second International Joint Conference on Information Sciences,
Wrightsville Beach, NC, USA, 34-37, 1995
[Ste01] Stefanowski, J., Multiple and Hybrid Classifiers, In Polkowski, L.
(Ed.), Formal Methods and Intelligent Techniques in Control,
Decision Making, Multimedia and Robotics, Warsaw, 174-188, 2001

[SYKP00] Shin, K. C., Yun, U. T., Kim, H. K., Park, S. C., A Hybrid
Approach of Neural Network and Memory Based Learning to Data
Mining, IEEE Transactions on Neural Networks, 11(3):637-646, 2000

[SZ95] Shan, N., Ziarko, W., Data-Based Acquisition and Incremental
Modification of Classification Rules, Computational Intelligence,
11:371-388, 1995

[Tin94] Ting, K. M., Discretization of Continuous Valued Attributes and
Instance Based Learning, TR 491, University of Sydney, 1994

[TQT96] Tan, C. L., Quah, T. S., Teh, H. H., An Artificial Neural
Network that Models Human Decision Making, IEEE Computer,
64-70, 1996
[Utg88] Utgoff, P. E., Perceptron Trees: A Case Study in Hybrid Concept
Representations, In Proceedings of the 7th National Conference on
AI, Los Altos/San Francisco, 601-605, 1988
[VD94] Vafaie, H., DeJong, K., Improving a Rule Induction System Using
Genetic Algorithms, In Michalski, R., Tecuci, G. (Eds.), Machine
Learning: A Multistrategy Approach, Vol. IV, Chapter 17, San
Francisco: Morgan Kaufmann, 1994

[VO00] Vinterbo, S., Øhrn, A., Minimal Approximate Hitting Sets and Rule
Templates, International Journal of Approximate Reasoning,
25(2):123-143, 2000

[Wan01] Wani, M. A., SAFARI: A Structured Approach For Automatic Rule
Induction, IEEE Transactions on Systems, Man and Cybernetics,
31(4):650-657, 2001

[WF99] Witten, I. H., Frank, E., Data Mining: Practical Machine
Learning Tools and Techniques with Java Implementations,
Morgan Kaufmann Publishers, 1999

[Win92] Winston, P. H., Artificial Intelligence, Addison-Wesley, 1992

[Wro95] Wroblewski, J., Finding Minimal Reducts Using Genetic
Algorithms, Warsaw University of Technology, Institute of
Computer Science, Reports 16/95, 1995

[Wro98] Wroblewski, J., Genetic Algorithms in Decomposition and
Classification Problems, In [PS98], 472-492, 1998

[Wu95] Wu, X., Knowledge Acquisition from Databases, Ablex Publishing
Corp., USA, 1995

[YW02] Yang, Y., Webb, G. I., A Comparative Study of Discretization
Methods for Naïve-Bayes Classifiers, In Proceedings of the 2002
Pacific Rim Knowledge Acquisition Workshop (PKAW), Japan,
159-173, 2002

[Zad89] Zadeh, L. A., Knowledge Representation in Fuzzy Logic, IEEE
Transactions on Knowledge and Data Engineering, 1(1):89-99, 1989

[Zak00] Zaki, M. J., Scalable Algorithms for Association Mining, IEEE
Transactions on Knowledge and Data Engineering, 12(3):372-390,
2000

[Zia01] Ziarko, W., Probabilistic Decision Tables in the Variable Precision
Rough Set Model, Computational Intelligence, 17(3), 2001

[Zia93a] Ziarko, W., Analysis of Uncertain Information in the Framework of
Variable Precision Rough Sets, Foundations of Computing and
Decision Sciences, 18(3-4):381-396, 1993

[Zia93b] Ziarko, W., Variable Precision Rough Set Model, Journal of
Computer and System Sciences, 46:39-59, 1993

[ZWC03] Zhu, X., Wu, X., Chen, Q., Eliminating Class Noise in Large
Datasets, In Proceedings of the Twentieth International Conference
on Machine Learning (ICML-03), Washington, 2003

List of Publications

1. Sonajharia Minz, Rajni Jain, Rough Set based Decision Tree Model for
Classification, In Proceedings of the 5th International Conference on Data
Warehousing and Knowledge Discovery (DaWaK 2003), Prague, Czech
Republic, September 3-5, 2003, LNCS 2737:172-181, 2003

2. Rajni Jain, Sonajharia Minz, Classifying Mushrooms in the Hybridized
Rough Sets Framework, In Proceedings of the 1st Indian International
Conference on Artificial Intelligence (IICAI-03), December 2003, Hyderabad,
India, 554-567, 2003

3. Sonajharia Minz, Rajni Jain, Hybridizing Rough Set Framework for
Classification: An Experimental View, In Proceedings of the 3rd International
Conference on Hybrid Intelligent Systems, 14-17 December 2003, Monash
University, Melbourne, Australia, In Abraham, A. et al. (Eds.), Design and
Application of Hybrid Intelligent Systems, IOS Press, 631-640, 2003

4. Rajni Jain, Sonajharia Minz, Should Decision Trees be Learned Using
Rough Sets?, In Proceedings of the 1st Indian International Conference on
Artificial Intelligence (IICAI-03), December 2003, Hyderabad, India,
1466-1479, 2003

5. Sonajharia Minz, Rajni Jain, Refining Decision Tree Classifiers Using
Rough Set Tools, International Journal of Hybrid Intelligent Systems, 2(2),
2005 (accepted)

6. Rajni Jain, Sonajharia Minz, P. Adhiguru, Rough Set based Decision Tree
for Mining Rules: Poverty Alleviation through Rural Employment,
presented at the 7th Annual Conference of the Society of Statistics and
Computer Application, Sri Venkateswara College, New Delhi, December
22-24, 2004

7. Rajni Jain, Sonajharia Minz, Dynamic RDT Model for Data Mining, 2nd
Indian International Conference on Artificial Intelligence (IICAI-05), India,
2005 (submitted)
