Discretization, defined as a set of cuts over the domains of attributes, is an important pre-processing task for numeric data analysis. Some machine learning algorithms require a discrete feature space, yet real-world applications must handle continuous attributes. Many supervised discretization methods have been proposed to deal with this problem, but little has been done to develop unsupervised discretization methods for domains where no class information is available. Furthermore, existing methods, such as equal-width or equal-frequency binning, are not well-principled, raising the need for more sophisticated techniques for the unsupervised discretization of continuous features. This paper presents a novel unsupervised discretization method that uses non-parametric density estimators to automatically adapt sub-interval widths to the data. The proposed algorithm searches for the next pair of sub-intervals to produce, evaluating each candidate cut point on the basis of the density the cut induces in the two sub-intervals and the density given by a kernel density estimator within each sub-interval. Cross-validated log-likelihood is used to select the maximal number of intervals. The proposed method is compared to equal-width and equal-frequency discretization through experiments on well-known benchmark data.
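To make the comparison concrete, the two baseline discretizers the abstract mentions, together with the two ingredients of the proposed approach (a kernel density estimator and a held-out log-likelihood score over a candidate set of cuts), can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the Gaussian kernel choice, and the piecewise-uniform likelihood model used for scoring are all assumptions made for the sketch.

```python
import math

def equal_width_cuts(values, k):
    # k-1 cut points splitting [min, max] into k intervals of equal width
    lo, hi = min(values), max(values)
    return [lo + (hi - lo) * i / k for i in range(1, k)]

def equal_frequency_cuts(values, k):
    # k-1 cut points so that each interval holds roughly len(values)/k points
    s = sorted(values)
    n = len(s)
    return [s[n * i // k] for i in range(1, k)]

def gaussian_kde(sample, bandwidth):
    # Gaussian kernel density estimator -- one instance of the
    # non-parametric estimators the abstract refers to
    norm = len(sample) * bandwidth * math.sqrt(2 * math.pi)
    return lambda x: sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in sample
    ) / norm

def loglik(cuts, train, test, eps=1e-12):
    # Held-out log-likelihood of the piecewise-uniform density induced by
    # `cuts` on `train` -- an assumed stand-in for the cross-validated
    # log-likelihood score the abstract uses to select the number of intervals.
    lo, hi = min(train), max(train)
    edges = [lo] + list(cuts) + [hi]
    ll = 0.0
    for x in test:
        for a, b in zip(edges, edges[1:]):
            if a <= x <= b:
                mass = sum(1 for v in train if a <= v <= b) / len(train)
                ll += math.log(max(mass / (b - a), eps))
                break
        else:
            ll += math.log(eps)  # test point falls outside the training range
    return ll
```

Under this sketch, candidate cut sets (e.g. for increasing numbers of intervals) would be compared by their held-out `loglik`, stopping when adding an interval no longer improves the score.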