Feature subset selection is an important issue in machine learning, since non-representative features may reduce the accuracy and comprehensibility of the hypotheses induced by supervised learning algorithms. Feature subset selection is applied as a data pre-processing step that aims to find a subset of features which describes the data well, to be used as input to the inducer. Several approaches to this problem have been proposed, among them the filter approach. This work proposes a filter that uses Fractal Dimension as the importance criterion to select a subset of features from the original data. Empirical results on real-world data sets are presented. A performance comparison of the proposed criterion with two other criteria frequently used within the filter approach shows that Fractal Dimension is an appropriate criterion for selecting features for supervised learning.
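The abstract does not spell out how the filter works, so the following is only a rough illustration of the general idea, not the authors' algorithm. It sketches one common way to use fractal dimension for feature selection: estimate the correlation fractal dimension D2 by box counting, then backward-eliminate features whose removal barely changes the dimension (redundant features add no intrinsic dimensionality). The function names, the `tol` threshold, and the fixed comparison against the full-space dimension are all assumptions made for this sketch.

```python
import numpy as np

def correlation_fractal_dimension(X, n_scales=5):
    """Estimate the correlation fractal dimension D2 by box counting.

    For grid cells of side r = 1/2, 1/4, ..., let S(r) be the sum of
    squared cell occupancies; D2 is the slope of log S(r) vs. log r.
    """
    X = np.asarray(X, dtype=float)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)
    X = (X - mins) / span  # normalize each feature to [0, 1]
    log_r, log_s = [], []
    for k in range(1, n_scales + 1):
        r = 0.5 ** k
        cells = np.floor(X / r).astype(np.int64)   # grid-cell index per point
        _, counts = np.unique(cells, axis=0, return_counts=True)
        log_r.append(np.log(r))
        log_s.append(np.log(np.sum(counts.astype(float) ** 2)))
    slope, _intercept = np.polyfit(log_r, log_s, 1)
    return slope

def fractal_filter(X, tol=0.1):
    """Backward elimination (a hypothetical variant): repeatedly drop the
    feature whose removal changes D2 the least relative to the full data,
    while that change stays below `tol`."""
    features = list(range(X.shape[1]))
    d_full = correlation_fractal_dimension(X)
    while len(features) > 1:
        deltas = {
            f: abs(d_full - correlation_fractal_dimension(
                X[:, [g for g in features if g != f]]))
            for f in features
        }
        f_min = min(deltas, key=deltas.get)
        if deltas[f_min] > tol:
            break  # every remaining feature contributes intrinsic dimension
        features.remove(f_min)
    return features

# Toy data: feature 1 duplicates feature 0, feature 2 is independent noise,
# so the intrinsic dimension is 2 and one duplicate can be dropped.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
X = np.column_stack([t, t, rng.random(2000)])
selected = fractal_filter(X)
d_line = correlation_fractal_dimension(np.column_stack([t, t]))
```

On the toy data the duplicated feature is eliminated because removing it leaves the box-counting statistics unchanged, while the line embedded in 2-D yields a dimension estimate near 1; real data would require choosing the grid scales and `tol` with care, since the log-log fit is only linear over a limited range of r.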