Discretization techniques have played an important role in machine learning and data mining, since most methods in these areas require the training data to contain only discrete attributes. Data Discretization Unification (DDU), one of the state-of-the-art discretization techniques, trades off classification error against the number of discretized intervals and unifies existing discretization criteria. However, it suffers from two deficiencies. First, DDU is very inefficient: it searches over a large number of parameters to find good results, yet still does not guarantee an optimal solution. Second, DDU does not take into account the number of inconsistent records produced by discretization, which leads to unnecessary information loss. To overcome these deficiencies, this paper presents a Universal Discretization technique, namely UniDis. We first develop a non-parametric normalized discretization criterion that avoids the effect of the relatively large difference between classification error and the number of discretized intervals on discretization results. In addition, we define a new entropy-based measure of inconsistency for multi-dimensional variables, which effectively controls information loss while producing a concise summarization of continuous variables. Finally, we propose a heuristic algorithm that achieves better discretization based on the normalized criterion and the entropy-based inconsistency measure. Besides theoretical analysis, experimental results with the J4.8 decision tree and Naive Bayes classifier demonstrate that, under a popular statistical test, our approach is statistically comparable to DDU and yields discretization schemes that significantly improve classification accuracy over the other previously known discretization methods.
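To make the inconsistency idea concrete, the sketch below shows one plausible way to score a discretization scheme by entropy-weighted inconsistency: records that fall into the same tuple of intervals but carry different class labels contribute their class-distribution entropy, weighted by group size. This is an illustrative reading of the abstract, not the paper's exact UniDis measure; the function names, the interval encoding, and the weighting are assumptions made for the example.

```python
from collections import Counter, defaultdict
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a class-count distribution."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def inconsistency(records, labels, cut_points):
    """Entropy-based inconsistency of a discretization scheme (illustrative).

    records    : list of tuples of continuous attribute values
    labels     : one class label per record
    cut_points : per attribute, a sorted list of interval boundaries

    Each record maps to the tuple of interval indices it falls into.
    Groups that share a discretized pattern but disagree on the class
    label contribute their class entropy, weighted by group frequency,
    so a perfectly consistent scheme scores 0.
    """
    def interval(value, cuts):
        # index of the interval: number of cut points the value exceeds
        return sum(value > c for c in cuts)

    groups = defaultdict(Counter)
    for rec, lab in zip(records, labels):
        pattern = tuple(interval(v, cuts) for v, cuts in zip(rec, cut_points))
        groups[pattern][lab] += 1

    n = len(records)
    return sum(sum(cnt.values()) / n * entropy(list(cnt.values()))
               for cnt in groups.values())
```

For instance, a single cut at 0.5 merges the records 0.1 and 0.2 into one interval; if they carry different labels, that group is inconsistent, whereas adding a second cut between them drives the score to zero. A heuristic discretizer in the spirit described above would weigh such a score against the growing number of intervals.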