We argue the value of unsupervised metalearning and discuss the attendant need for suitable similarity, or distance, functions. We leverage the notion of diversity among learners, as used in ensemble learning, to design a distance function for clustering learning algorithms. We revisit the most popular measures of diversity and show that only one of them, Classifier Output Difference (COD), is a metric. We then use COD to produce a clustering of 21 learning algorithms, show how this clustering differs from a clustering based on accuracy, and show how it can be used to highlight interesting, sometimes unexpected, similarities among learning algorithms.
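As a minimal sketch of the idea: COD between two classifiers can be taken as the fraction of test instances on which their predictions disagree, i.e. a normalized Hamming distance over prediction vectors, which is why it satisfies the metric axioms (identity, symmetry, triangle inequality). The predictions below are purely hypothetical and stand in for the outputs of different learning algorithms on a shared test set; the resulting pairwise distance matrix is the kind of input one would feed to a distance-based clustering routine.

```python
import numpy as np

def cod(preds_a, preds_b):
    """Classifier Output Difference: fraction of test instances on which
    two classifiers disagree (a normalized Hamming distance)."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Hypothetical predictions of three learners on the same 8-instance test set.
p1 = [0, 1, 1, 0, 1, 0, 0, 1]
p2 = [0, 1, 0, 0, 1, 1, 0, 1]
p3 = [1, 1, 0, 0, 0, 1, 0, 1]
learners = [p1, p2, p3]

# Pairwise COD distance matrix, usable by any distance-based clustering
# method (e.g. agglomerative/hierarchical clustering).
D = np.array([[cod(a, b) for b in learners] for a in learners])
```

Because COD is a metric, the matrix `D` is symmetric with a zero diagonal and obeys the triangle inequality, properties that most other diversity measures lack and that distance-based clustering implicitly relies on.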