Tolerating noisy, irrelevant and novel attributes in instance-based learning algorithms
International Journal of Man-Machine Studies - Special issue: symbolic problem solving in noisy and novel task environments
Numerical recipes in C (2nd ed.): the art of scientific computing
C4.5: programs for machine learning
Machine Learning - Special issue on applications in molecular biology
Introduction to Modern Information Retrieval
Average-case analysis of a nearest neighbor algorithm
IJCAI'93 Proceedings of the 13th International Joint Conference on Artificial Intelligence - Volume 2
Learning of Variability for Invariant Statistical Pattern Recognition
EMCL '01 Proceedings of the 12th European Conference on Machine Learning
Dimensional reduction effects of feature vectors by coefficients of determination
CIS'04 Proceedings of the First international conference on Computational and Information Science
Many learning algorithms implicitly assume that all the attributes present in the data are relevant to the learning task. However, several studies have demonstrated that this assumption rarely holds; for many supervised learning algorithms, the inclusion of irrelevant or redundant attributes can degrade classification accuracy. While a variety of dimensionality reduction methods exist, many of them are only appropriate for datasets that contain a small number of attributes (e.g.
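The degradation described above can be reproduced with a small synthetic experiment (an illustrative sketch using a hand-rolled 1-nearest-neighbour classifier, not a method from the paper): one feature cleanly separates two classes, while added noise features dominate the Euclidean distance and pull accuracy toward chance.

```python
import random

random.seed(0)

def make_data(n_per_class=20, n_noise=8):
    """Two well-separated classes on one informative feature,
    plus several large-scale irrelevant noise features."""
    data = []
    for label, mean in ((0, 0.0), (1, 5.0)):
        for _ in range(n_per_class):
            informative = random.gauss(mean, 0.5)
            noise = [random.uniform(0, 20) for _ in range(n_noise)]
            data.append(([informative] + noise, label))
    return data

def loo_1nn_accuracy(data, dims):
    """Leave-one-out accuracy of 1-NN using only the given dimensions."""
    correct = 0
    for i, (x, y) in enumerate(data):
        best = min(
            (j for j in range(len(data)) if j != i),
            key=lambda j: sum((x[d] - data[j][0][d]) ** 2 for d in dims),
        )
        correct += data[best][1] == y
    return correct / len(data)

data = make_data()
acc_clean = loo_1nn_accuracy(data, dims=[0])        # informative feature only
acc_noisy = loo_1nn_accuracy(data, dims=range(9))   # all features, noise included
print(f"informative only: {acc_clean:.2f}, with noise: {acc_noisy:.2f}")
```

With the noise features included, the distance computation is dominated by irrelevant dimensions, so the nearest neighbour is effectively random; restricting to the informative feature restores near-perfect accuracy. This is the effect that motivates attribute-selection methods for instance-based learners.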