Selecting the most appropriate learning algorithm for a given task has become a crucial research issue since the advent of multi-paradigm data mining tool suites. To address it, researchers have tried to extract dataset characteristics that might provide clues as to which learning algorithm is most appropriate. We propose to extend this research by extracting inducer profiles, i.e., sets of meta-level features that characterize learning algorithms in terms of their representation and functionality, efficiency, practicality, and resilience. Values for these features can be determined from author specifications, expert consensus, or previous case studies; however, learning algorithms also need to be characterized in more quantitative terms, on the basis of extensive, controlled experiments. This paper illustrates the proposed approach and reports empirical findings on one resilience-related characteristic of classification learning algorithms, namely their tolerance to irrelevant variables in the training data.
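As an illustration only (not the experimental protocol used in the paper), the kind of measurement the abstract refers to can be sketched as follows: pad a training set with randomly generated, uninformative features and observe how much each classifier's cross-validated accuracy degrades. The dataset, classifiers, and number of added features below are arbitrary choices made purely for the example.

```python
# Minimal sketch (assumed setup, not the paper's protocol): estimate a classifier's
# tolerance to irrelevant variables by comparing cross-validated accuracy on the
# original data against data padded with random, uninformative features.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier


def accuracy_with_irrelevant(estimator, X, y, n_irrelevant, rng):
    """Cross-validated accuracy after appending n_irrelevant random features."""
    if n_irrelevant > 0:
        noise = rng.normal(size=(X.shape[0], n_irrelevant))
        X = np.hstack([X, noise])
    return cross_val_score(estimator, X, y, cv=10).mean()


rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    baseline = accuracy_with_irrelevant(clf, X, y, 0, rng)
    degraded = accuracy_with_irrelevant(clf, X, y, 20, rng)
    # A smaller accuracy drop indicates higher tolerance to irrelevant variables.
    print(f"{name}: baseline={baseline:.3f}, +20 irrelevant={degraded:.3f}, "
          f"drop={baseline - degraded:.3f}")
```

Repeating such a measurement over many datasets and numbers of added irrelevant features would yield a quantitative value for this resilience-related entry of an inducer profile.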